chash stringlengths 16 16 | content stringlengths 267 674k |
|---|---|
5dd8fa33ba8a5926 | From formulasearchengine
Jump to navigation Jump to search
Template:Pp-pc1 {{#invoke:Hatnote|hatnote}} Template:Pp-move-indef Template:Infobox particleThe electron is a subatomic particle, symbol Template:SubatomicParticle or Template:SubatomicParticle, with a negative elementary electric charge.[1] Electrons belong to the first generation of the lepton particle family,[2] and are generally thought to be elementary particles because they have no known components or substructure.[3] The electron has a mass that is approximately 1/1836 that of the proton.[4] Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value in units of ħ, which means that it is a fermion. Being fermions, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle.[2] Like all matter, electrons have properties of both particles and waves, and so can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a higher De Broglie wavelength for typical energies.
Many physical phenomena involve electrons in an essential role, such as electricity, magnetism, and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions.[5] An electron in space generates an electric field surrounding it. An electron moving relative to an observer generates a magnetic field. External magnetic fields deflect an electron. Electrons radiate or absorb energy in the form of photons when accelerated. Laboratory instruments are capable of containing and observing individual electrons as well as electron plasma using electromagnetic fields, whereas dedicated telescopes can detect electron plasma in outer space. Electrons have many applications, including electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle accelerators.
Interactions involving electrons and other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between positive protons inside atomic nuclei and negative electrons composes atoms. Ionization or changes in the proportions of particles changes the binding energy of the system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.[6] British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms in 1838;[7] Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897.[8][9][10] Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons may be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles may be totally annihilated, producing gamma ray photons.
{{#invoke:see also|seealso}}
The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. [11] In his 1600 treatise De Magnete{{#invoke:Category handler|main}}, the English scientist William Gilbert coined the New Latin term electricus{{#invoke:Category handler|main}}, to refer to this property of attracting small objects after being rubbed. [12] Both electric and electricity are derived from the Latin ēlectrum{{#invoke:Category handler|main}} (also the root of the alloy of the same name), which came from the Greek word for amber, ἤλεκτρον{{#invoke:Category handler|main}} (ēlektron{{#invoke:Category handler|main}}).
In the early 1700s, Francis Hauksbee and French chemist Charles François de Fay independently discovered what they believed were two kinds of frictional electricity—one generated from rubbing glass, the other from rubbing resin. From this, Du Fay theorized that electricity consists of two electrical fluids, vitreous and resinous, that are separated by friction, and that neutralize each other when combined.[13] A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but the same electrical fluid under different pressures. He gave them the modern charge nomenclature of positive and negative respectively.[14] Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit.[15]
In 1891 Stoney coined the term electron to describe these elementary charges, writing later in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron".[18] The word electron is a combination of the words electr(ic) and (i)on.[19] The suffix -on which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron.[20][21]
A round glass vacuum tube with a glowing circular beam inside
A beam of electrons deflected in a circle by a magnetic field[22]
The German physicist Johann Wilhelm Hittorf studied electrical conductivity in rarefied gases: in 1869, he discovered a glow emitted from the cathode that increased in size with decrease in gas pressure. In 1876, the German physicist Eugen Goldstein showed that the rays from this glow cast a shadow, and he dubbed the rays cathode rays.[23] During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode ray tube to have a high vacuum inside.[24] He then showed that the luminescence rays appearing within the tube carried energy and moved from the cathode to the anode. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged.[25][26] In 1879, he proposed that these properties could be explained by what he termed 'radiant matter'. He suggested that this was a fourth state of matter, consisting of negatively charged molecules that were being projected with high velocity from the cathode.[27]
In 1896, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson,[9] performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier.[8] Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles," had perhaps one thousandth of the mass of the least massive ion known: hydrogen.[8][10] He showed that their charge to mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal.[8][30] The name electron was again proposed for these particles by the Irish physicist George F. Fitzgerald, and the name has since gained universal acceptance.[25]
Robert Millikan
While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter.[31] In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays.[32] This evidence strengthened the view that electrons existed as components of atoms.[33][34]
The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team,[8] using clouds of charged water droplets generated by electrolysis,[9] and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913.[35] However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.[36]
Atomic theory
{{#invoke:see also|seealso}}
By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons.[38] In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with the energy determined by the angular momentum of the electron's orbits about the nucleus. The electrons could move between these states, or orbits, by the emission or absorption of photons at specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom.[39] However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms.[38]
Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them.[40] Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics.[41] In 1919, the American chemist Irving Langmuir elaborated on the Lewis' static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness".[42] The shells were, in turn, divided by him in a number of cells each containing one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table,[41] which were known to largely repeat themselves according to the periodic law.[43]
In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was inhabited by no more than a single electron. (This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle.)[44] The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, Goudsmit and Uhlenbeck suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment.[38][45] The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting.[46]
Quantum mechanics
{{#invoke:see also|seealso}} In his 1924 dissertation Recherches sur la théorie des quanta{{#invoke:Category handler|main}} (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter possesses a de Broglie wave similar to light.[47] That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment.[48] Wave-like nature is observed, for example, when a beam of light is passed through parallel slits and creates interference patterns. In 1927, the interference effect was found in a beam of electrons by English physicist George Paget Thomson with a thin metal film and by American physicists Clinton Davisson and Lester Germer using a crystal of nickel.[49]
A symmetrical blue cloud that decreases in intensity from the center outward
De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated.[50] Rather than yielding a solution that determined the location of an electron over time, this wave equation also could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first being by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum.[51] Once spin and the interaction between multiple electrons were considered, quantum mechanics later made it possible to predict the configuration of electrons in atoms with higher atomic numbers than hydrogen.[52]
In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the hamiltonian formulation of the quantum mechanics of the electro-magnetic field.[53] To resolve some problems within his relativistic equation, in 1930 Dirac developed a model of the vacuum as an infinite sea of particles having negative energy, which was dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron.[54] This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatrons, and using electron as a generic term to describe both the positively and negatively charged variants.
In 1947 Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of hydrogen atom, which should have the same energy, were shifted in relation to each other, the difference being the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called anomalous magnetic dipole moment of the electron. This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s.[55]
Particle accelerators
With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles.[56] The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons, moving near the speed of light, through a magnetic field.[57]
With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968.[58] This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron.[59] The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.[60][61]
Confinement of individual electrons
Fundamental properties
The invariant mass of an electron is approximately [[Orders of magnitude (mass)#10-25 kg or less|Template:Val]] kilograms,[65] or Template:Val atomic mass units. On the basis of Einstein's principle of mass–energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836.[4][66] Astronomical measurements show that the proton-to-electron mass ratio has held the same value for at least half the age of the universe, as is predicted by the Standard Model.[67]
Electrons have an electric charge of Template:Val coulomb,[65] which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. This elementary charge has a relative standard uncertainty of Template:Val.[65] Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign.[68] As the symbol e is used for the elementary charge, the electron is commonly symbolized by Template:SubatomicParticle, where the minus sign indicates the negative charge. The positron is symbolized by Template:SubatomicParticle because it has the same properties as the electron but with a positive rather than negative charge.[64][65]
The electron has an intrinsic angular momentum or spin of Template:Frac.[65] This property is usually stated by referring to the electron as a [[spin-½|spin-Template:Frac]] particle.[64] For such particles the spin magnitude is Template:Frac ħ.[note 1] while the result of the measurement of a projection of the spin on any axis can only be ±Template:Frac. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis.[65] It is approximately equal to one Bohr magneton,[69][note 2] which is a physical constant equal to Template:Val.[65] The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.[70]
The electron has no known substructure.[3][71] and it is assumed to be a point particle with a point charge and no spatial extent.[2] In classical physics, the angular momentum and magnetic moment of an object depend upon its physical dimensions. Hence, the concept of a dimensionless electron possessing these properties might seem paradoxical and inconsistent to experimental observations in Penning traps which point to finite non-zero radius of the electron. A possible explanation of this paradoxical situation is given below in the "Virtual particles" subsection by taking into consideration the Foldy-Wouthuysen transformation. The issue of the radius of the electron is a challenging problem of the modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible to the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity.[72] These aspects have been analyzed in detail by Dmitri Ivanenko and Arseny Sokolov.
Observation of a single electron in a Penning trap shows the upper limit of the particle's radius is 10−22 meters.[73] There is a physical constant called the "classical electron radius", with the much larger value of Template:Val, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.[74][note 3]
There are elementary particles that spontaneously decay into less massive particles. An example is the muon, which decays into an electron, a neutrino and an antineutrino, with a mean lifetime of Template:Val seconds. However, the electron is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation.[75] The experimental lower bound for the electron's mean lifetime is Template:Val years, at a 90% confidence level.[76][77]
Quantum properties
Virtual particles
{{#invoke:main|main}} Physicists believe that empty space may be continually creating pairs of virtual particles, such as a positron and electron, which rapidly annihilate each other shortly thereafter.[79] The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħTemplate:Val. Thus, for a virtual electron, Δt is at most Template:Val.[80]
While an electron–positron virtual pair is in existence, the coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron.[81][82] This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator.[83] Virtual particles cause a comparable shielding effect for the mass of the electron.[84]
The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment).[69][85] The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.[86]
The apparent paradox (mentioned above in the properties subsection) of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons cause the electron to shift about in a jittery fashion (known as zitterbewegung),[87] which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron.[2][88] In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines.[81]
An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force is determined by Coulomb's inverse square law.[89] When an electron is in motion, it generates a magnetic field.[78]:140 The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor.[90] The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).
A graph with arcs showing the motion of charged particles
An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift.[note 5] The maximum magnitude of this wavelength shift is h/mec, which is known as the Compton wavelength.[95] For an electron, it has a value of Template:Val.[65] When the wavelength of the light is long (for instance, the wavelength of the visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or Linear Thomson scattering.[96]
When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV.[97][98] On the other hand, high-energy photons may transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.[99][100]
In the theory of electroweak interaction, the left-handed component of electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a Template:SubatomicParticle and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Template:SubatomicParticle exchange, and this is responsible for neutrino-electron elastic scattering.[101]
Atoms and molecules
Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential.[102] Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect.[103] To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.[104]
The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics.[106] The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules.[6] Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals; much as they can occupy atomic orbitals in isolated atoms.[107] A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distribution of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. On the contrary, in non-bonded pairs electrons are distributed in a large volume around nuclei.[108]
Four bolts of lightning strike the ground
A lightning discharge consists primarily of a flow of electrons.[109] The electric potential needed for lightning may be generated by a triboelectric effect.[110][111]
When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electrical current, in a process known as superconductivity. In BCS theory, this behavior is modeled by pairs of electrons entering a quantum state known as a Bose–Einstein condensate. These Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance.[120] (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.)[121] However, the mechanism by which higher temperature superconductors operate remains uncertain.
Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, Orbitons and holons.[122][123] The former carries spin and magnetic moment, the next carries its orbital location while the latter electrical charge.
Motion and energy
The plot starts at zero and curves sharply upward toward the right
where me is the mass of electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.[125] Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum.[47] For the 51 GeV electron above, the wavelength is about Template:Val, small enough to explore structures well below the size of an atomic nucleus.[126]
The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe.[127] For the first millisecond of the Big Bang, the temperatures were over 10 billion Kelvin and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons:
Template:SubatomicParticle + Template:SubatomicParticleTemplate:SubatomicParticle + Template:SubatomicParticle
For reasons that remain uncertain, during the process of leptogenesis there was an excess in the number of electrons over positrons.[129] Hence, about one electron in every billion survived the annihilation process. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe.[130][131] The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes.[132] Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process,
Template:SubatomicParticleTemplate:SubatomicParticle + Template:SubatomicParticle + Template:SubatomicParticle
For about the next Template:ValTemplate:Val, the excess electrons remained too energetic to bind with atomic nuclei.[133] What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.[134]
Roughly one million years after the big bang, the first generation of stars began to form.[134] Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus.[135] An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (Template:SimpleNuclide2).[136]
A branching tree representing the particle production
When pairs of virtual particles (such as an electron and positron) are created in the vicinity of the event horizon, the random spatial distribution of these particles may permit one of them to appear on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space.[138] In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.[139]
Cosmic rays are particles traveling through space with high energies. Energy events as high as Template:Val have been recorded.[140] When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions.[141] More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion.
Template:SubatomicParticleTemplate:SubatomicParticle + Template:SubatomicParticle
A muon, in turn, can decay to form an electron or positron.[142]
Template:SubatomicParticleTemplate:SubatomicParticle + Template:SubatomicParticle + Template:SubatomicParticle
Aurorae are mostly caused by energetic electrons precipitating into the atmosphere.[143]
In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge.[104] The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months.[147] The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.[148]
Plasma applications
Particle beams
Electron beams are used in welding.[153] They allow energy densities up to Template:Val across a narrow focus diameter of 0.1–1.3 mm and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.[154][155]
Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer.[156] This technique is limited by high costs, slow performance, the need to operate the beam in the vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.[157]
Electron beam processing is used to irradiate materials in order to change their physical properties or sterilize medical and food products.[158] Electron beams fluidise or quasi-melt glasses without significant increase of temperature on intensive irradiation: e.g. intensive electron radiation causes a many orders of magnitude decrease of viscosity and stepwise decrease of its activation energy.[159]
The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material.[166] In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm.[167] By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential.[168] The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms.[169] This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.
Other applications
In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FEL can emit a coherent high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices may find manufacturing, communication and various medical applications, such as soft tissue surgery.[173]
Electrons are important in cathode ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets.[174] In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse.[175] Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.[176]
See also
{{#invoke:Portal|portal}} Template:Cmn
1. This magnitude is obtained from the spin quantum number as
for quantum number s = Template:Frac.
See: {{#invoke:citation/CS1|citation |CitationClass=book }}
2. Bohr magneton:
4. Radiation from non-relativistic electrons is sometimes termed cyclotron radiation.
5. The change in wavelength, Δλ, depends on the angle of the recoil, θ, as follows,
1. Template:Cite web
2. 2.0 2.1 2.2 2.3 {{#invoke:citation/CS1|citation |CitationClass=book }}
3. 3.0 3.1 {{#invoke:Citation/CS1|citation |CitationClass=journal }}
4. 4.0 4.1 Template:Cite web
6. 6.0 6.1 {{#invoke:citation/CS1|citation |CitationClass=book }}
8. 8.0 8.1 8.2 8.3 8.4 {{#invoke:Citation/CS1|citation |CitationClass=journal }}
9. 9.0 9.1 9.2 Dahl (1997:122–185).
10. 10.0 10.1 {{#invoke:citation/CS1|citation |CitationClass=book }}
11. {{#invoke:citation/CS1|citation |CitationClass=book }}
14. Template:Cite web
15. {{#invoke:citation/CS1|citation |CitationClass=book }}
19. "electron, n.2". OED Online. March 2013. Oxford University Press. Accessed 12 April 2013 [1]
22. {{#invoke:citation/CS1|citation |CitationClass=book }}
23. Dahl (1997:55–58).
25. 25.0 25.1 25.2 {{#invoke:citation/CS1|citation |CitationClass=book }}
26. Dahl (1997:64–78).
28. Dahl (1997:99).
29. Frank Wilczek: "Happy Birthday, Electron" Scientific American, June 2012.
30. Template:Cite web
32. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
33. Buchwald and Warwick (2001:90–91).
35. {{#invoke:Citation/CS1|citation |CitationClass=journal }} Original publication in Russian: {{#invoke:Citation/CS1|citation |CitationClass=journal }}
38. 38.0 38.1 38.2 {{#invoke:citation/CS1|citation |CitationClass=book }}
39. Template:Cite web
40. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
41. 41.0 41.1 {{#invoke:Citation/CS1|citation |CitationClass=journal }}
44. {{#invoke:citation/CS1|citation |CitationClass=book }}
47. 47.0 47.1 Template:Cite web
49. Template:Cite web
50. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
51. {{#invoke:citation/CS1|citation |CitationClass=book }}
52. {{#invoke:citation/CS1|citation |CitationClass=book }}
54. Template:Cite web
55. Template:Cite web
56. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
57. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
58. {{#invoke:citation/CS1|citation |CitationClass=book }}
59. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
60. Template:Cite web
61. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
62. Template:Cite doi
63. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
64. 64.0 64.1 64.2 {{#invoke:citation/CS1|citation |CitationClass=book }}
65. 65.0 65.1 65.2 65.3 65.4 65.5 65.6 65.7 65.8 The original source for CODATA is {{#invoke:Citation/CS1|citation |CitationClass=journal }}
Individual physical constants from the CODATA are available at: Template:Cite web Cite error: Invalid <ref> tag; name "CODATA" defined multiple times with different content
66. {{#invoke:citation/CS1|citation |CitationClass=book }}
67. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
68. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
69. 69.0 69.1 {{#invoke:Citation/CS1|citation |CitationClass=journal }}
70. {{#invoke:citation/CS1|citation |CitationClass=book }}
71. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
72. Eduard Shpolsky, Atomic physics (Atomnaia fizika),second edition, 1951
73. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
74. {{#invoke:citation/CS1|citation |CitationClass=book }}
75. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
76. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
78. 78.0 78.1 78.2 78.3 78.4 {{#invoke:citation/CS1|citation |CitationClass=book }}
79. Template:Cite web
80. {{#invoke:citation/CS1|citation |CitationClass=book }}
81. 81.0 81.1 {{#invoke:citation/CS1|citation |CitationClass=book }}
82. Template:Cite news
83. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
84. {{#invoke:citation/CS1|citation |CitationClass=conference }}—lists a 9% mass difference for an electron that is the size of the Planck distance.
85. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
87. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
88. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
89. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
91. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
92. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
94. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
95. Template:Cite web
96. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
97. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
99. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
100. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
101. {{#invoke:citation/CS1|citation |CitationClass=conference }}
102. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
104. 104.0 104.1 {{#invoke:Citation/CS1|citation |CitationClass=journal }}
105. {{#invoke:citation/CS1|citation |CitationClass=book }}
108. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
109. {{#invoke:citation/CS1|citation |CitationClass=book }}
110. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
111. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
114. {{#invoke:citation/CS1|citation |CitationClass=book }}
115. {{#invoke:citation/CS1|citation |CitationClass=book }}
116. 116.0 116.1 {{#invoke:citation/CS1|citation |CitationClass=book }}
117. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
118. {{#invoke:citation/CS1|citation |CitationClass=book }}
119. {{#invoke:citation/CS1|citation |CitationClass=book }}
120. Template:Cite web
121. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
122. Template:Cite web
123. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
124. Template:Cite web
125. Template:Cite web
126. {{#invoke:citation/CS1|citation |CitationClass=book }}
127. {{#invoke:citation/CS1|citation |CitationClass=book }}
128. {{#invoke:citation/CS1|citation |CitationClass=book }}
129. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
130. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
131. Template:Cite web
132. Template:Cite arXiv
133. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
134. 134.0 134.1 {{#invoke:Citation/CS1|citation |CitationClass=journal }}
135. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
136. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
137. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
138. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
139. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
140. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
141. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
142. Template:Cite news
143. Template:Cite news
144. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
145. Template:Cite web
146. {{#invoke:citation/CS1|citation |CitationClass=book }}
147. Template:Cite web
148. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
149. Template:Cite web
150. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
151. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
152. Template:Cite web
153. Template:Cite web
154. {{#invoke:citation/CS1|citation |CitationClass=book }}
155. {{#invoke:citation/CS1|citation |CitationClass=book }}
156. {{#invoke:citation/CS1|citation |CitationClass=conference }}
157. {{#invoke:citation/CS1|citation |CitationClass=book }}
158. {{#invoke:citation/CS1|citation |CitationClass=conference }}
159. Mobus G. et al. (2010). Journal of Nuclear Materials, v. 396, 264–271, doi:10.1016/j.jnucmat.2009.11.020
160. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
161. Template:Cite web
162. {{#invoke:citation/CS1|citation |CitationClass=book }}
163. {{#invoke:citation/CS1|citation |CitationClass=book }}
164. {{#invoke:citation/CS1|citation |CitationClass=book }}
165. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
166. Template:Cite web
167. {{#invoke:citation/CS1|citation |CitationClass=book }}
168. {{#invoke:citation/CS1|citation |CitationClass=book }}
169. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
170. {{#invoke:citation/CS1|citation |CitationClass=book }}
171. {{#invoke:citation/CS1|citation |CitationClass=book }}
172. {{#invoke:citation/CS1|citation |CitationClass=book }}
173. {{#invoke:citation/CS1|citation |CitationClass=book }}
174. {{#invoke:citation/CS1|citation |CitationClass=book }}
175. {{#invoke:citation/CS1|citation |CitationClass=book }}
176. Template:Cite web
Cite error: <ref> tag with name "buchwald1" defined in <references> is not used in prior text.
Cite error: <ref> tag with name "2010 CODATA" defined in <references> is not used in prior text.
External links
Template:Wikisource1911Enc Template:Sister
|CitationClass=book }}
Template:QED Template:Particles
Template:Featured article ml:ഇലക്ട്രോണ് |
a749e9b095847677 | Citation for this page in APA citation style. Close
Mortimer Adler
Rogers Albritton
Alexander of Aphrodisias
Samuel Alexander
William Alston
Louise Antony
Thomas Aquinas
David Armstrong
Harald Atmanspacher
Robert Audi
Alexander Bain
Mark Balaguer
Jeffrey Barrett
William Barrett
William Belsham
Henri Bergson
George Berkeley
Isaiah Berlin
Richard J. Bernstein
Bernard Berofsky
Robert Bishop
Max Black
Susanne Bobzien
Emil du Bois-Reymond
Hilary Bok
Laurence BonJour
George Boole
Émile Boutroux
Michael Burke
Lawrence Cahoone
Joseph Keim Campbell
Rudolf Carnap
Ernst Cassirer
David Chalmers
Roderick Chisholm
Randolph Clarke
Samuel Clarke
Anthony Collins
Antonella Corradini
Diodorus Cronus
Jonathan Dancy
Donald Davidson
Mario De Caro
Daniel Dennett
Jacques Derrida
René Descartes
Richard Double
Fred Dretske
John Dupré
John Earman
Laura Waddell Ekstrom
Herbert Feigl
Arthur Fine
John Martin Fischer
Frederic Fitch
Owen Flanagan
Luciano Floridi
Philippa Foot
Alfred Fouilleé
Harry Frankfurt
Richard L. Franklin
Michael Frede
Gottlob Frege
Peter Geach
Edmund Gettier
Carl Ginet
Alvin Goldman
Nicholas St. John Green
H.Paul Grice
Ian Hacking
Ishtiyaque Haji
Stuart Hampshire
Sam Harris
William Hasker
Georg W.F. Hegel
Martin Heidegger
Thomas Hobbes
David Hodgson
Shadsworth Hodgson
Baron d'Holbach
Ted Honderich
Pamela Huby
David Hume
Ferenc Huoranszki
William James
Lord Kames
Robert Kane
Immanuel Kant
Tomis Kapitan
Walter Kaufmann
Jaegwon Kim
William King
Hilary Kornblith
Christine Korsgaard
Saul Kripke
Andrea Lavazza
Christoph Lehner
Keith Lehrer
Gottfried Leibniz
Jules Lequyer
Michael Levin
George Henry Lewes
David Lewis
Peter Lipton
C. Lloyd Morgan
John Locke
Michael Lockwood
E. Jonathan Lowe
John R. Lucas
Alasdair MacIntyre
Ruth Barcan Marcus
James Martineau
Storrs McCall
Hugh McCann
Colin McGinn
Michael McKenna
Brian McLaughlin
John McTaggart
Paul E. Meehl
Uwe Meixner
Alfred Mele
Trenton Merricks
John Stuart Mill
Dickinson Miller
Thomas Nagel
Otto Neurath
Friedrich Nietzsche
John Norton
Robert Nozick
William of Ockham
Timothy O'Connor
David F. Pears
Charles Sanders Peirce
Derk Pereboom
Steven Pinker
Karl Popper
Huw Price
Hilary Putnam
Willard van Orman Quine
Frank Ramsey
Ayn Rand
Michael Rea
Thomas Reid
Charles Renouvier
Nicholas Rescher
Richard Rorty
Josiah Royce
Bertrand Russell
Paul Russell
Gilbert Ryle
Jean-Paul Sartre
Kenneth Sayre
Moritz Schlick
Arthur Schopenhauer
John Searle
Wilfrid Sellars
Alan Sidelle
Ted Sider
Henry Sidgwick
Walter Sinnott-Armstrong
Saul Smilansky
Michael Smith
Baruch Spinoza
L. Susan Stebbing
Isabelle Stengers
George F. Stout
Galen Strawson
Peter Strawson
Eleonore Stump
Francisco Suárez
Richard Taylor
Kevin Timpe
Mark Twain
Peter Unger
Peter van Inwagen
Manuel Vargas
John Venn
Kadri Vihvelin
G.H. von Wright
David Foster Wallace
R. Jay Wallace
Ted Warfield
Roy Weatherford
C.F. von Weizsäcker
William Whewell
Alfred North Whitehead
David Widerker
David Wiggins
Bernard Williams
Timothy Williamson
Ludwig Wittgenstein
Susan Wolf
Michael Arbib
Walter Baade
Bernard Baars
Leslie Ballentine
Gregory Bateson
John S. Bell
Mara Beller
Charles Bennett
Ludwig von Bertalanffy
Susan Blackmore
Margaret Boden
David Bohm
Niels Bohr
Ludwig Boltzmann
Emile Borel
Max Born
Satyendra Nath Bose
Walther Bothe
Hans Briegel
Leon Brillouin
Stephen Brush
Henry Thomas Buckle
S. H. Burbury
Donald Campbell
Anthony Cashmore
Eric Chaisson
Gregory Chaitin
Jean-Pierre Changeux
Arthur Holly Compton
John Conway
John Cramer
Francis Crick
E. P. Culverwell
Olivier Darrigol
Charles Darwin
Richard Dawkins
Terrence Deacon
Lüder Deecke
Richard Dedekind
Louis de Broglie
Stanislas Dehaene
Max Delbrück
Abraham de Moivre
Paul Dirac
Hans Driesch
John Eccles
Arthur Stanley Eddington
Gerald Edelman
Paul Ehrenfest
Albert Einstein
Hugh Everett, III
Franz Exner
Richard Feynman
R. A. Fisher
Joseph Fourier
Philipp Frank
Steven Frautschi
Edward Fredkin
Lila Gatlin
Michael Gazzaniga
GianCarlo Ghirardi
J. Willard Gibbs
Nicolas Gisin
Paul Glimcher
Thomas Gold
A. O. Gomes
Brian Goodwin
Joshua Greene
Jacques Hadamard
Mark Hadley
Patrick Haggard
Stuart Hameroff
Augustin Hamon
Sam Harris
Hyman Hartman
John-Dylan Haynes
Donald Hebb
Martin Heisenberg
Werner Heisenberg
John Herschel
Art Hobson
Jesper Hoffmeyer
E. T. Jaynes
William Stanley Jevons
Roman Jakobson
Pascual Jordan
Ruth E. Kastner
Stuart Kauffman
Martin J. Klein
William R. Klemm
Christof Koch
Simon Kochen
Hans Kornhuber
Stephen Kosslyn
Ladislav Kovàč
Leopold Kronecker
Rolf Landauer
Alfred Landé
Pierre-Simon Laplace
David Layzer
Benjamin Libet
Seth Lloyd
Hendrik Lorentz
Josef Loschmidt
Ernst Mach
Donald MacKay
Henry Margenau
James Clerk Maxwell
Ernst Mayr
John McCarthy
Warren McCulloch
George Miller
Ulrich Mohrhoff
Jacques Monod
Emmy Noether
Abraham Pais
Howard Pattee
Wolfgang Pauli
Massimo Pauri
Roger Penrose
Steven Pinker
Colin Pittendrigh
Max Planck
Susan Pockett
Henri Poincaré
Daniel Pollen
Ilya Prigogine
Hans Primas
Adolphe Quételet
Jürgen Renn
Juan Roederer
Jerome Rothstein
David Ruelle
Tilman Sauer
Jürgen Schmidhuber
Erwin Schrödinger
Aaron Schurger
Claude Shannon
David Shiang
Herbert Simon
Dean Keith Simonton
B. F. Skinner
Lee Smolin
Ray Solomonoff
Roger Sperry
John Stachel
Henry Stapp
Tom Stonier
Antoine Suarez
Leo Szilard
Max Tegmark
William Thomson (Kelvin)
Giulio Tononi
Peter Tse
Vlatko Vedral
Heinz von Foerster
John von Neumann
John B. Watson
Daniel Wegner
Steven Weinberg
Paul A. Weiss
John Wheeler
Wilhelm Wien
Norbert Wiener
Eugene Wigner
E. O. Wilson
Stephen Wolfram
H. Dieter Zeh
Ernst Zermelo
Wojciech Zurek
Konrad Zuse
Fritz Zwicky
Free Will
Mental Causation
James Symposium
The Problem of Microscopic Reversibility
Loschmidt's Paradox
In 1874, Josef Loschmidt criticized his younger colleague Ludwig Boltzmann's 1866 attempt to derive from basic classical dynamics the increasing entropy required by the second law of thermodynamics.
Increasing entropy is the intimate connection between time and the second law of thermodynamics that Arthur Stanley Eddington later called the Arrow of Time. (The fundamental arrow of time is the expansion of the universe, which makes room for all the other arrows.) Despite never seeing entropy decrease in an isolated system, attempts to "prove" that it always increases have been failures.
Loschmidt's criticism was based on the simple idea that the laws of classical dynamics are time reversible. Consequently, if we just turned the time around, the time evolution of the system should lead to decreasing entropy. Of course we cannot turn time around, but a classical dynamical system will evolve in reverse if all the particles could have their velocities exactly reversed. Apart from the practical impossibility of doing this, Loschmidt had shown that systems could exist for which the entropy should decrease instead of increasing. This is called Loschmidt's "Reversibility Objection" (Umwiederkehreinwand) or "Loschmidt's paradox." We call it the problem of microscopic reversibility.
We can visualize the free expansion of a gas that occurs when we rapidly withdraw a piston. Because this is a movie, we can reverse the movie to show what Loschmidt imagined would happen. But Boltzmann thought that even if the particles could all have their velocities reversed, minute errors in the collisions would likely prevent a perfect return to the original state.
Forward Time Time Reversal
To demonstrate the randomness in each collision, which Boltzmann described as "molecular disorder (molekular ungeordnet) we need a program that reverses the velocities of the gas particles, and adds randomness into the collisions. (This is a work in progress.)
Information physics claims that microscopic reversibility is actually extremely unlikely and that the intrinsic path information in particles needed to reduce entropy is erased by matter-radiation interactions or by internal quantum transitions in the colliding atoms considered as a "quasi"-molecule.
Microscopic time reversibility is one of the foundational assumptions of both classical mechanics and quantum mechanics. It is mistakenly thought to be the basis for the "detailed balancing" of chemical reactions in thermodynamic equilibrium. In fact microscopic reversibility is an assumption that is only statistically valid in the same limits as any "quantum to classical transition." This is the limit when the number of particles is large enough that we can average over quantum effects. Quantum events also approach classical behavior in the limit of large quantum numbers, which Niels Bohr called the "correspondence principle."
It may seem presumptuous for an information philosopher to challenge such a fundamental principle of statistical mechanics and even quantum statistical physics as microscopic reversibility.
What "detailed balancing" means is that in thermodynamic equilibrium, the number of forward reactions is exactly balanced by the number of reverse reactions. And this is correct. But microscopic reversibility, while still true when considering averages over time, should not be confused with the time reversibility of a specific individual collision between particles.
We will examine the collision of two atoms and show that if their velocities are reversed at some time after the collision, it is highly improbable that they will retrace their paths. This does not mean that, given enough particle collisions, there will not be statistically many collisions that are essentially the same as the "reverse collisions" needed for detailed balancing in chemical reactions, for transport processes with the Boltzmann equation, and for the Onsager reciprocal relations in non-equilibrium conditions.
The Origin of Irreversibility
Our careful quantum analysis shows that time reversibility fails even in the most ideal conditions (the simplest case of two particles in collision), provided internal quantum structure or the quantum-mechanical interaction with radiation is taken into account.
Albert Einstein was the first to see this, first in his 1909 extension of work on the photoelectric effect but especially in his 1916-17 work on the emission and absorption of radiation. This was the work in which Einstein showed that quantum theory implies ontological chance, which he famously disliked, ("God does not play dice!"). For Einstein, detailed balancing was not the result of microscopic reversibility, it was his starting assumption.
Einstein's work is sometimes cited as proof of detailed balancing and microscopic reversibility. (Wikipedia, for example.) In fact, Einstein used Boltzmann's assumption of detailed balancing, along with the "Boltzmann principle" that the probability of states with energy E is reduced by the exponential "Boltzmann factor," f(E) ∝ e-E/kT, to derive his transition probabilities for emission and absorption of radiation. Einstein also derived Planck's radiation law and Bohr's second "quantum postulate" Em - En = hν. But Einstein distinctly denied any symmetry in the elementary processes of emission and absorption.
As early as 1909, he noted that the elementary process of emission is not "invertible." There are outgoing spherical waves of radiation, but incoming spherical waves are never seen.
In a deterministic universe, the path information needed to predict the future motions of all particles would be preserved. If information is a conserved quantity, the future and the past are all contained in the present. The information about future paths is precisely the same information that, if reversed, would predict microscopic reversibility of each and every collision. The introduction of ontological probabilities and statistics would deny such determinism. If the motions of particles have a chance element, such determinism can not exist. And this is exactly what Einstein did in his papers on the emission and absorption of radiation by matter. He found that quantum theory implies ontological chance. A "weakness in the theory," he called it.
What we might call Einstein's "radiation asymmetry" was introduced with these words,
The elementary process of the emission and absorption of radiation is asymmetric, because the process is directed, as Einstein had explicitly noted first in 1909, and we think he had seen as early as 1905. The apparent isotropy of the emission of radiation is only what Einstein called "pseudo-isotropy" (Pseudoisotropie), a consequence of time averages over large numbers of events. Einstein often substituted time averages for space averages, or averages over the possible states of a system in statistical mechanics.
a quantum theory free from contradictions can only be obtained if the emission process, just as absorption, is assumed to be directional. In that case, for each elementary emission process Zm->Zn a momentum of magnitude (εm—εn)/c is transferred to the molecule. If the latter is isotropic, we shall have to assume that all directions of emission are equally probable.
If the molecule is not isotropic, we arrive at the same statement if the orientation changes with time in accordance with the laws of chance. Moreover, such an assumption will also have to be made about the statistical laws for absorption, (B) and (B'). Otherwise the constants Bmn and Bnm would have to depend on the direction, and this can be avoided by making the assumption of isotropy or pseudo-isotropy (using time averages).
Now the principle of microscopic reversibility is a fundamental assumption of statistical mechanics. It underlies the principle of "detailed balancing," which is critical to the understanding of chemical reactions. In thermodynamic equilibrium, the number of forward reactions is exactly balanced by the number of reverse reactions. But microscopic reversibility, while true in the sense of averages over time, should not be confused with the reversibility of individual collisions between molecules.
The equations of classical dynamics are reversible in time. And the deterministic Schrödinger equation of motion in quantum mechanics is also time reversible. But the interactions of photons and material particles like electrons and atoms are distinctly not reversible!
An explanation of microscopic irreversibility in atomic and molecular collisions would provide the needed justification for Ludwig Boltzmann's assumption of "molecular disorder" and strengthen his H-Theorem. This is what we hope to do.
In quantum mechanics, microscopic time reversibility is assumed true by most scientists because the deterministic Schrödinger equation itself is time reversible. But the Schrödinger equation only describes the deterministic time evolution of the probabilities of various quantum events, which are themselves not deterministic and not reversible.
When an actual event occurs, the probabilities of multiple possible events collapse to the actual occurrence of one event. In quantum mechanics, this is the irreversible collapse of the wave function that John von Neumann called "Process 1."
The possibility of quantum transitions between closely spaced vibrational and rotational energy levels in the "quasi-molecule' introduces indeterminacy in the future paths of the separate atoms. The classical path information needed to ensure the deterministic dynamical behavior has been partially erased. The memory of the past needed to predict the "determined" future has been lost.
Even assuming the practical impossibility of a perfect classical time reversal, in which we simply turn the two particles around, quantum physics would require two measurements to locate the two particles, followed by two state preparations to send them in the opposite direction. These could only be made within the precision of Heisenberg's uncertainty principle and so could not perfectly produce microscopic reversibility, which is thus only a classical idealization, like the idea of determinism..
Heisenberg indeterminacy puts calculable limits on the accuracy with which perfect reversed paths can be achieved.
Let us assume this impossible task can be completed, and it sends the two particles into the reverse collision paths. But on the return path, there is only a finite probability that a "sum over histories" calculation will produce the same (or exactly reversed) quantum transitions between vibrational and rotational states that occurred in the first collision.
Thus a quantum description of a two-particle collision establishes the microscopic irreversibility that Boltzmann sometimes described as his assumption of "molecular disorder." In his second (1877) derivation of the H-theorem, Boltzmann used a statistical approach and the molecular disorder assumption to get away from the time-reversibility assumptions of classical dynamics.
We must develop a deep insight into Einstein's asymmetry between light and matter, one that was appreciated as early as the 1880's by Max Planck's great mentor Gustave Kirchhoff, but was not understood in quantum mechanical terms until Einstein's understanding of nonlocality and the relation between waves and particles in 1909..
It is still ignored in quantum statistical mechanics by those who mistakenly think that the time reversible Schrödinger equation means microscopic interactions are reversible.
Maxwell and Boltzmann had shown that collisions between material particles, analyzed statistically, cause the distribution of positions and velocities to approach their equilibrium Maxwell-Boltzmann distribution.
A bit later, Kirchhoff and Planck knew that an extreme non-equilibrium distribution of radiation, for example a monochromatic radiation field, will remain out of equilibrium indefinitely. But if that radiation interacts with even the tiniest amount of matter, a speck of carbon black was their example, all the wavelengths of the spectrum - the Kirchhoff law - soon appear.
So we can say that the approach to equilibrium of a radiation field has the same origin of irreversibility as that of matter.
Radiation without matter cannot equilibrate. Photons do not interact, except at the extremely high energies where they can convert to matter and anti-matter.
Our new insight is that matter without radiation also cannot equilibrate in a way that escapes the reversibility and recurrence objections, as is taught in every textbook and review article on statistical mechanics to this day.
It is thus the irreversible interaction of the two, light and matter, photons and electrons, that lies behind the increase of entropy in the universe. The second law of thermodynamics would not explain the increase of entropy except for the microscopic irreversibility that we have shown to be the case.
Microscopic irreversibility not only explains the second law, it validates Boltzmann's brilliant assumption of "molecular disorder" to justify his statistical arguments.
Zermelo's paradox was a later criticism of Ludwig Boltzmann's attempt to derive the increasing entropy required by the second law of thermodynamics. It also involves time. Assuming infinite available time, a finite universe with fixed matter, energy, and information will at some point return to any given earlier state.
We now know that even a finite part of the universe cannot return to exactly the same state, because the surrounding universe will have aged and be in a different information state. This is the information philosophy solution to the problem of eternal recurrence, as seen by Arthur Stanley Eddington and H. Dieter Zeh.
The Origin of Irreversibility (pdf)
Microscopic Irreversibility, chapter 25 of Great Problems of Philosophy and Physics Solved?.
For Teachers
Chapter 5.7 - Recurrence Problem Chapter 5.9 - Universals
Part Four - Freedom Part Six - Solutions
Normal | Teacher | Scholar |
3be3dfa779cd905a | Field of Science
Book review: “Lithium: A Doctor, a Drug, and a Breakthrough” by Walter Brown
A fascinating book about a revolutionary medical discovery that has saved and treated millions of lives, was adopted with a lot of resistance and made by a most unlikely, simple man who was a master observer. Lithium is still the gold standard for bipolar disorder that affects millions of people, and it’s the unlikeliest of drugs - a simple ion that is abundant in the earth’s crust and is used in applications as diverse as iPhone batteries and hydrogen bombs. Even before the breakthrough antipsychotic drug chlorpromazine, lithium signaled the dawn of modern psychopharmacology in which chemical substances replaced Freudian psychoanalysis and primitive methods like electro-convulsive therapy as the first line of treatment for mental disorders.
The book describes how an unassuming Australian psychiatrist and Japanese POW named John Cade found out lithium’s profound effects on manic-depressive patients using a hunch and serendipity (which is better called “non-linear thinking”), some scattered historical evidence, primitive equipment (he kept urine samples in his family fridge) and a few guinea pigs. And then it describes how Danish psychiatrists like Mogens Schou had to fight uphill battles to convince the medical community that not only was lithium a completely revolutionary drug but also a prophylactic one.
The debates on lithium’s efficacy got personal at times but also shed light on how some of our most successful drugs did not always emerge from the most rigorous clinical trials, and how ethics can sometimes trump the design of these trials (for instance, many doctors find it unethical to continue to give patients a placebo if a therapy is found to be as immediately and powerfully impactful as lithium was). It is also a sobering lesson to realize in this era of multimillion dollar biotech companies and academic labs, how some of the most transformative therapies we know were discovered by lone individuals working with simple equipment and an unfettered mind.
Thanks to the work of these pioneers, lithium is still the gold standard, and it has saved countless lives from unbearable agony and debilitation, significantly because of its preventive effects. Patients who had been debilitated by manic-depression for decades showed an almost magical and permanent remission. Perhaps the most humane effect of lithium therapy was in drastically reducing the rate of suicides in bipolar patients in whom the rate is 10 to 20 times higher compared to the general population.
The book ends with some illuminating commentary about why lithium is still not used often in the US, largely because as a common natural substance it is unpatentable and therefore does not lend itself to Big Pharma’s aggressive marketing campaigns. The common medication for treating bipolar disorder in the US is valproate combined with other drugs, but these don't come without side effects.
Stunningly, even after decades of use we still don’t know exactly how it works, partly because we also don’t know the exact causes of bipolar disorder. Unlike most psychiatric drugs, lithium clearly has general, systemic effects, and this makes its mechanism of action difficult to figure out. Somewhat contrary to this fact, it strangely also seems to be unique efficacious in treating manic-depression and not other psychiatric problems. What could account for this paradoxical mix of general systemic effects and efficacy in a very specific disorder? There are no doubt many hidden surprises hidden in future lithium research, but it all started with an Australian doctor acting on a simple hunch, derived from treating patients in a POW camp in World War 2, that a deficiency of something must be causing manic-depressive illness.
I highly recommended this book, both as scientific history and as a unique example of a groundbreaking medical discovery.
Spooky factions at a distance
For me, a highlight of an otherwise ill-spent youth was reading mathematician John Casti’s fantastic book “Paradigms Lost“. The book came out in the late 1980s and was gifted to my father who was a professor of economics by an adoring student. Its sheer range and humor had me gripped from the first page. Its format is very unique – Casti presents six “big questions” of science in the form of a courtroom trial, advocating arguments for the prosecution and the defense. He then steps in as jury to come down on one side or another. The big questions Casti examines are multidisciplinary and range from the origin of life to the nature/nurture controversy to extraterrestrial intelligence to, finally, the meaning of reality as seen through the lens of the foundations of quantum theory. Surprisingly, Casti himself comes down on the side of the so-called many worlds interpretation (MWI) of quantum theory, and ever since I read “Paradigms Lost” I have been fascinated by this analysis.
So it was with pleasure and interest that I came across Sean Carroll’s book that also comes down on the side of the many worlds interpretation. The MWI goes back to the very invention of quantum theory by pioneering physicists like Niels Bohr, Werner Heisenberg and Erwin Schrödinger. As exemplified by Heisenberg’s famous uncertainty principle, quantum theory signaled a striking break with reality by demonstrating that one can only talk about the world only probabilistically. Contrary to common belief, this does not mean that there is no precision in the predictions of quantum mechanics – it’s in fact the most accurate scientific framework known to science, with theory and experiment agreeing to several decimal places – but rather that there is a natural limit and fuzziness in how accurately we can describe reality. As Bohr put it, “physics does not describe reality; it describes reality as subjected to our measuring instruments and observations.” This is actually a reasonable view – what we see through a microscope and telescope obviously depends on the features of that particular microscope or telescope – but quantum theory went further, showing that the uncertainty in the behavior of the subatomic world is an inherent feature of the natural world, one that doesn’t simply come about because of uncertainty in experimental observations or instrument error.
At the heart of the probabilistic framework of quantum theory is the wave function. The wave function is a mathematical function that describes the state of the system, and its square gives a measure of the probability of what state the system is in. The controversy starts right away with this most fundamental entity. Some people think that the wave function is “epistemic”, in the sense that it’s not a real object and is simply related to our knowledge – or our ignorance – of the system. Others including Carroll think it’s “ontological”, in the sense of being a real entity that describes features of the system. The fly in the ointment concerns the act of actually measuring this wave function and therefore the state of a quantum system, and this so-called “measurement problem” is as old as the theory itself and kept even the pioneers of quantum theory awake.
The problem is that once a quantum system interacts with an “observer”, say a scintillation screen or a particle accelerator, its wave function “collapses” because the system is no longer described probabilistically and we know for certain what it’s like. But this raises two problems: Firstly, how do you exactly describe the interaction of a microscopic system with a macroscopic object like a particle accelerator? When exactly does the wave function “collapse”, by what mechanism and in what time interval? And who can collapse the wave function? Does it need to be human observers for instance, or can an ant or a computer do it? What can we in fact say about the consciousness of the entity that brings about its collapse?
The second problem is that contrary to popular belief, quantum theory is not just a theory of the microscopic world – it’s a theory of everything except gravity (for now). This led Erwin Schrödinger to postulate his famous cat paradox which demonstrated the problems inherent in the interpretation of the theory. Before measurement, Schrödinger said, a system is deemed to exist in a superposition of states while after measurement it exists only in one; does this mean that macroscopic objects like cats also exist in a superposition of entangled states, in case of his experiment in a mixture of half dead-half alive states? The possibility bothered Schrödinger and his friend Einstein to no end. Einstein in particular refused to believe that quantum theory was the final word, and there must be “hidden variables” that would allow us to get rid of the probabilities if only we knew what they were; he called the seemingly instantaneous entanglement of quantum states “spooky action at a distance”. Physicist John Bell put that particular objection to rest in the 1960s, proving that at least local quantum theories could not be based on hidden variables.
Niels Bohr and his group of followers from Copenhagen were more successful in their publicity campaign. They simply declared the question of what is “real” before measurement irrelevant and essentially pushed the details of the measurement problem under the rug by saying that the act of observation makes something real. The cracks were evident even then – the physicist Robert Serber once pointedly pointed out problems with putting the observer on a pedestal by asking if we might regard the Big Bang unreal because there were no observers back then. But Bohr and his colleagues were widespread and rather zealous, and most attempts by physicists like Einstein and David Bohm met with either derision or indifference.
Enter Hugh Everett who was a student of John Wheeler at Princeton. Everett essentially applied Occam’s Razor to the problem of collapse and asked a provocative question: What are the implications if we simply assume that the wave function does not collapse? While this avoids asking about the aforementioned complications with measurement, it creates problems of its own since we know for a fact that we can observe only one reality (dead vs alive cat, an electron track here rather than there) while the wave function previously described a mixture of realities. This is where Everett made a bold and revolutionary proposal, one that was as courageous as Einstein’s proposal of the constancy of the speed of light: he surmised that when there is a measurement, the other realities encoded in the wavefunction split off from our own. They simply don’t collapse and are every bit as real as our own. Just like Einstein showed in his theory of relativity that there are no privileged observers, Everett conjectured that there are no privileged observer-created realities. This is the so-called many-worlds interpretation of quantum mechanics.
Everett proposed this audacious claim in his PhD thesis in 1957 and showed it to Wheeler. Wheeler was an enormously influential physicist, and while he was famous for outlandish ideas that influenced generations of physicists like Richard Feynman and Kip Thorne, he was also a devotee of Bohr’s Copenhagen school – he and Bohr had published a seminal paper explaining nuclear fission way back in 1939, and Wheeler regarded Bohr’s Delphic pronouncements akin to those of Confucius – that posited observer-generated reality. He was sympathetic to Everett but could not support him in the face of Bohr’s objections. Everett soon left theoretical physics and spent the rest of his career doing nuclear weapons research, a chain-smoking, secretive, absentee father who dropped dead of an unhealthy lifestyle in 1982. After a brief resurrection by Everett himself at a conference organized by Wheeler, many-worlds didn’t see much popular dissemination until writers like Casti and the physicist David Deutsch wrote about it.
As Carroll indicates, the MWI has a lot of things going for it. It avoids the prickly, convoluted details of what exactly constitutes a measurement and the exact mechanism behind it; it does away with especially thorny details of what kind of consciousness can collapse a wavefunction. It’s elegant and satisfies Occam’s Razor because it simply postulates two entities – a wave function and a Schrödinger equation through which the wave function evolves through time, and nothing else. One can calculate the likelihood of each of the “many worlds” by postulating a simple rule proposed by Max Born that assigns a weight to every probability. And it also avoids an inconvenient split between the quantum and the classical world, treating both systems quantum mechanically. According to the MWI, when an observer interacts with an electron, for instance, the observer’s wave function becomes entangled with the electron’s and continues to evolve. The reason why we still see only one Schrödinger’s cat (dead or alive) is because each one is triggered by distinct random events like the passage of photons, leading to separate outcomes. Carroll thus sees many-worlds as basically a logical extension of the standard machinery of quantum theory. In fact he doesn’t even see the many worlds as “emerging” (although he does see them as emergent); he sees them as always present and intrinsically encoded in the wave function’s evolution through the Schrödinger equation.
A scientific theory is of course only as good as its experimental predictions and verification – as a quote ascribed to Ludwig Boltzmann puts it, matters of elegance should be left to the tailor and the cobbler. Does MWI postulate elements of reality that are different from those postulated by other interpretations? The framework is on shakier ground here since there are no clear observable predictions except those predicted by standard quantum theory that would truly privilege it over others. Currently it seems that the best we can say is that many worlds is consistent with many standard features of quantum mechanics. But so are many other interpretations. To be accepted as a preferred interpretation, a theory should not just be consistent with experiment, but uniquely so. For instance, consider one of the very foundations of quantum theory – wave-particle duality. Wave-particle duality is as counterintuitive and otherworldly as any other concept, but it’s only by postulating this idea that we can ever make sense of disparate experiments verifying quantum mechanics, experiments like the double-slit experiment and the photoelectric effect. If we get rid of wave-particle duality from our lexicon of quantum concepts, there is no way we can ever interpret the results of thousands of experiments from the subatomic world such as particle collisions in accelerators. There is thus a necessary, one-to-one correspondence between wave-particle duality and reality. If we get rid of many-worlds, however, it does not make any difference to any of the results of quantum theory, only to what we believe about them. Thus, at least as of now, many-worlds remains a philosophically pleasing framework than a preferred scientific one.
Many-worlds also raises some thorny questions about the multiple worlds that it postulates. Is it really reasonable to believe that there are literally an infinite copies of everything – not just an electron but the measuring instrument that observes it and the human being who records the result – splitting off every moment? Are there copies of me both writing this post and not writing it splitting off as I type these words? Is the universe really full of these multiple worlds, or does it make more sense to think of infinite universes? One reasonable answer to this question is to say that quantum theory is a textbook example of how language clashes with mathematics. This was well-recognized by the early pioneers like Bohr: Bohr was fond of an example where a child goes into a store and asks for some mixed sweets. The shopkeeper gives him two sweets and asks him to mix them himself. We might say that an electron is in “two places at the same time”, but any attempt to actually visualize this dooms us, because the only notion of objects existing in two places is one that is familiar to us from the classical world, and the analogy breaks down when we try to replace chairs or people with electrons. Visualizing an electron spinning on its axis the way the earth spins on its is also flawed.
Similarly, visualizing multiple copies of yourself actually splitting off every nanosecond sounds outlandish, but it’s only because that’s the only way for us to make sense of wave functions entangling and then splitting. Ultimately there’s only the math, and any attempts to cast it in the form of everyday language is a fundamentally misguided venture. Perhaps when it comes to talking about these things, we will have to resort to Wittgenstein’s famous quote – whereof we cannot speak, thereof we must be silent (or thereof we must simply speak in the form of pictures, as Wittgenstein did in his famous ‘Tractatus’). The other thing one can say about many-worlds is that while it does apply Occam’s Razor to elegantly postulating only the wave function and the Schrödinger equation, it raises questions about the splitting off process and the details of the multiple worlds that are similar to those about the details of measurement raised by the measurement problem. In that sense it only kicks the can of complex worms down the road, and in that case believing what particular can to open is a matter of taste. As an old saying goes, nature does not always shave with Occam’s Razor.
In the last part of the book, Carroll talks about some fascinating developments in quantum gravity, mainly the notion that gravity can emerge through microscopic degrees of freedom that are locally entangled with each other. One reason why this discussion is fascinating is because it connects many disparate ideas from physics into a potentially unifying picture – quantum entanglement, gravity, black holes and their thermodynamics. These developments don’t have much to do with many-worlds per se, but Carroll thinks they may limit the number of “worlds” that many worlds can postulate. But it’s frankly difficult to see how one can find definitive experimental evidence for any interpretation of quantum theory anytime soon, and in that sense Richard Feynman’s famous words, “I think it is safe to say that nobody understands quantum mechanics” may perpetually ring true.
Very reasonably, many-worlds is Carroll’s preferred take on quantum theory, but he’s not a zealot about it. He fully recognizes its limitations and discuss competing interpretations. But while Carroll deftly dissects many-worlds, I think that the real value of this book is to exhort physicists to take what are called the foundations of quantum mechanics more seriously. It is an attempt to make peace between different quantum factions and bring philosophers into the fold. There’s a huge number of “interpretations” of quantum theory, some more valid than others, being separated by each other as much by philosophical differences as by physical ones. There was a time when the spectacular results of quantum theory combined with the thorny philosophical problems it raised led to a tendency among physicists to “shut up and calculate” and not worry about philosophical matters. But philosophy and physics have been entwined since the ancient Greeks, and in one sense, one ends where the other begins. Carroll’s book is a hearty reminder for physicists and philosophers to eat at the same table, otherwise they may well remain spooky factions at a distance when it comes to interpreting quantum theory.
A new paper on kinase inhibitor discovery: not one on "drugs", and not one on an "AI breakthrough"
There is a new multicenter study on the discovery of some new kinase inhibitor compounds for the kinase DDR1 that has been making the rounds. Using a particular flavor of generative models, the authors derive a few potent and selective inhibitors for DDR1, a kinase target that has been implicated in fibrosis.
The paper is an interesting application of generative deep learning models to kinase inhibitor discovery. The authors start with six training datasets including ZINC and several patents along with a negative dataset of non-kinase inhibitors. After using their generative reinforcement learning model and filtering out reactives and clustering, they select 40 random molecules that have a less than 0.5 Tanimoto similarity to vendor stocks and the patent literature, and pick 6 out of these for testing. Four of the six compounds are indicated as showing an improvement in the potency against DDR1, although it seems that for two of these, the potency is little improved relative to the parent compound (10 and 21 nM vs 15 nM, which is well within the two or threefold margin of error in most biological assays). The selectivity of two of the compounds for the undesirable isoform DDR2 is also essentially the same (649 nM vs 1000 nM and 278 nM vs 162 nM; again within the twofold error margin of the assay). So from a potency standpoint, the algorithm seems to find equipotent inhibitors at best; given that these four molecules were culled from a starting set of 30,000, that indicates a hit rate of 0.01%. Good selectivity against a small kinase panel is demonstrated, but selectivity against a larger panel of off-targets is not indicated. There also don't seem to be tests for aggregation or non-specific behavior; computational techniques in drug discovery are well known to produce a surfeit of false positives. It would also be really helpful to get some SAR for these compounds to know if they are on-off non-specific binders or actual lead compounds.
Now, even equipotent inhibitors can be useful if they show good ADME properties or evidence scaffold hops. The group tested the inhibitors in liver microsomal assays, and they seem to have similar stability as a group of non-kinase inhibitor controls, although it would be good to see some accompanying data for DDR inhibitors next to this data. They also tested one of the compounds in a rodent model, and it seems to show satisfactory half lives; it's again not clear how these compare to other DDR inhibitors. Finally, they build a pharmacophore-based binding model of the inhibitor and compare it to a similar quantum mechanical model, but there is no experimental data (from NMR or mutagenesis for instance) which would allow a good experimental validation of this binding pose. Pharmacophore models are again notorious for producing false positives, and it's important to demonstrate that the pharmacophore in fact does not also fit the negative data.
The paper claims to have discovered the inhibitors "in 21 days" and tested them in 46. The main issue here - and this is by no means a critique of just this paper - is not that the discovered inhibitors show very modest improvement at best over the reference; it's that there is no baseline comparison, no null models, that can tell us what the true value of the technique is. This has been a longstanding complaint in the computational community. For instance, could regular docking followed by manual picking have found the same compounds in the same time? What about simple comparisons with property-based metrics or 2D metrics? And could a team of expert medicinal chemists brainstorming over beer have looked at the same data and come up with the same conclusions much sooner? I am glad that the predictions were actually tested - even this simple follow-up is often missing from computational papers - but 21 days is not as short as it sounds if you start with a vast amount of already-existing and curated data from databases and patents, and if simpler techniques can find the same results sooner. And the reliance on vast amounts of data is of course a well-known Achilles heel for deep learning techniques, so these techniques will almost certainly not work well on new targets with a paucity of data.
Inhibitor discovery is hardly a new problem for computational techniques, and any new method is up against a whole phalanx of structure and ligand-based methods that have been developed over the last 30+ years. There's a pretty steep curve to surmount if you actually want to proclaim your latest and greatest AI technique as a novel application. As it stands, the issue is not that the generative methods didn't discover anything, it's that it's impossible to actually judge their value because of an absence of baseline comparisons.
The AI hype machine is out in absolute full force on this one (see herehere and especially here for instance). I simply don't understand this great desire to proclaim every advance in a field as a breakthrough without simply calling it a useful incremental step or constructively criticizing it. And when respected sources like WIRED and Forbes proclaim that there's been a breakthrough in new drug discovery, the non-scientific public which is unfamiliar with IC50 curves or selectivity profiles or the fact that there's a huge difference between a drug and a lead will likely think that a new age of drug discovery is upon us. There's enough misleading hype about AI to go around, and adding more to the noise does both the scientific and the non-scientific community a disservice.
Longtime cheminformatics expert Andreas Bender has some similar thoughts here, and of course, Derek at In the Pipeline has an excellent, detailed take here. |
88640c6cf6df7c7e | Continuous quantum
David Tong argues that quantum mechanics is ultimately continuous, not discrete.
In other words, integers are not inputs of the theory, as Bohr thought. They are outputs. The integers are an example of what physicists call an emergent quantity. In this view, the term “quantum mechanics” is a misnomer. Deep down, the theory is not quantum. In systems such as the hydrogen atom, the processes described by the theory mold discreteness from underlying continuity. … The building blocks of our theories are not particles but fields: continuous, fluidlike objects spread throughout space. … The objects we call fundamental particles are not fundamental. Instead they are ripples of continuous fields.
Source: The Unquantum Quantum, Scientific American, December 2012.
11 thoughts on “Continuous quantum
1. Well, yeah. The Schrödinger equation, for instance, is a differential equation whose solutions are continuous wave functions on the complex numbers. But those solutions do not transform continuously from one to another, and their energy eigenvalues are discrete. That discreteness emerges from the possible solutions to the Schrödinger equation; it’s not put there ab initio.
2. Fred: I’m no expert in such things, but I believe what David Tong is arguing is that discrete effects in quantum mechanics, including the ones you allude to, are consequences of a continuous theory, so the continuous theory is more fundamental.
This reminds me of the quip that all computers are analog computers because digital computers are made out of analog parts. But those analog parts depend on discrete quantum phenomena. But those discrete quantum phenomena depend on continuous fields …
One of the themes of modern math is that continuous and discrete as not such discrete categories as once thought. We’re continually finding connections between the two perspectives. (Puns intended.)
3. I found similar remarks on Lubos Motl’s blog:
“There is strong scientific evidence today that the world isn’t discrete (and it isn’t simulated).
We do encounter integers and discrete mathematical structures in physics but in all the cases, we may see that they’re derived or emergent. They’re just limited discrete aspects of a more general and more fundamental underlying continuous structure, or they’re a rewriting of a continuous structure into discrete variables (eigenstates in a discrete spectrum) which makes it impossible to understand the value of certain parameters.
Quite generally, if the Universe were fundamentally discontinuous, it couldn’t have continuous symmetries such as the rotational symmetry, the Lorentz symmetry, and even descriptions in terms of gauge symmetries (which aren’t real full-fledged symmetries but redundancies) would be impossible. In a fundamentally discrete world, many (or infinitely many) continuous parameters would have to be precisely fine-tuned for the product to “look” invariant under the continuous transformations.”
4. It gets really tricky because
“discrete quantum phenomena depend on continuous fields ”
where “continuous fields” isn’t something directly observable, but more of a mathematical tool.
Now it’s possible that there is some fundamental connection between the math tools and reality.
But I thought that, in general, dealing with continuum at a fundamental level is complicated because all sorts of infinities start to creep in.
5. Fred: You could say that whether or not nature is continuous, the quantum mechanical model of nature is continuous.
6. The math model is just a tool, their concepts are not the reality…
Think of classical mechanics. Everybody know Newton’s laws, based on the concept of force, but there are two, more modern, alternative formulations of classical mechanics (Lagrangian and Hamiltonian), they bypass the concept of “force”, instead use energy, and momentum… Their equations look very different too.
Now, if you don’t know the other formulations, you could argue that the nature of the universe is ultimatelly linked to the concept of “force”, because you see it all over, in the hard of your equations…
What I mean is, it’s perfectly possible that somebody device some day a formulation of quantum mechanics that begins with discrete fields… as long as it arrives to the same observable results, it’s correct!
7. Excellent. So it’s not just turtles all the way down. It’s like turtle and taffy and then turtle and taffy…
8. Thank you Cade! Yours is the only comment I understand! Actually, I didn’t understand the article either. I just have to be continuously discrete about my lack of knowledge.
9. Quantum physics is quantum because the observable values are quantised, not because anything is discrete in the fields. That was the surprise, and why it got its name; classical physics is all continuous, and quantum physics just came up with quantisation in the middle, entirely unexpectedly.
The wave/particle duality is not so much about things being discrete on a fundamental level, but about fields and interactions being continuous yet only observable in discrete quantities. It is quantum as opposed to entirely continuous.
If we had only had discrete theories before that point, it would have been called Continuous Physics.
Comments are closed. |
58c5ca5b0a98b659 | « · »
Section 14.4: Molecular Models and Molecular Spectra
Please wait for the animation to completely load.
When we describe molecules, we do so by describing three energy scales: electronic, rotational, and vibrational. The electronic energy scale is the largest, on the order of the atomic energy scale of a few eV. The rotational energy scale is far less than that of the electronic, but on a larger than the vibrational energy scale. Both these scales are between 100 and 1000 times smaller than the electronic energy scale.
The rotational energy scale for a diatomic molecule is modeled quite well by the three-dimensional rigid rotator. This situation can be modeled quantum mechanically by considering the three-dimensional time-independent Schrödinger equation with a fixed radius, R,
ħ2/2μ [(1/R2sin(θ)) ∂/∂θ(sin(θ) ∂/∂θ) + 1/(R2sin2(θ)) ∂2/∂φ2 ] ψ(θ,φ) = Eψ(θ,φ) , (14.8)
which, after dividing by −ħ2/2μ, looks exactly like Eq. (13.22), with the substitution
E = l(l + 1)ħ2/2μR2 = l(l + 1)ħ2/2Imolecule,
where Imolecule = μR2, which gives us the energy spectrum for a three-dimensional rigid rotator. The vibrational energy scale is modeled by a simple harmonic oscillator and hence E = (n + 1/2)ħω.
In the animation, we show three important molecular potentials that closely model the rotational modes:
Kratzer Potential: V(r) = -2V0(α/r − α/2r2)
Morse Potential: V(r) = V0(e-2αr − 2er)
Lennard-Jones Potential: V(r) = V0(α/r12 − α/r6)
where α is a tunable parameter. These potential energy functions are shown in Animation 1Animation 2 , and Animation 3, respectively. To see the other bound states, simply click-drag in the energy level diagram on the left to select a level. The selected level will turn red.
Transitions between energy levels are also of importance and can be calculated once the energy levels are calculated. As an example of molecular spectra, shown in Animation 4 is an approximation to the P and R branches of the CO2 vibrational spectrum.
The OSP Network:
Open Source Physics - Tracker - EJS Modeling
Physlet Physics
Physlet Quantum Physics |
2a4fced71f46ea0d | vortex - DOC
Document Sample
vortex - DOC Powered By Docstoc
From Wikipedia, the free encyclopedia
Jump to: navigation, search
For other uses, see Vortex (disambiguation).
precise citations where appropriate. (April 2010)
called a vortex. The speed and rate of rotation of the fluid in a free (irrotational) vortex are
greatest at the center, and decrease progressively with distance from the center, whereas the
speed of a forced (rotational) vortex is zero at the center and increases proportional to the
distance from the center. Both types of vortices exhibit a pressure minimum at the center, though
the pressure minimum in a free vortex is much lower.
1 Properties
2 Dynamics
3 Two types of vortex
o 3.1 Free (irrotational) vortex
o 3.2 Forced (rotational) vortex
4 Vortices in magnets
5 Observations
o 5.1 Instances
6 See also
7 References and further reading
8 External links
[edit] Properties
Crow Instability contrail demonstrates vortex
Vortices display some special properties:
The fluid pressure in a vortex is lowest in the center and rises progressively with distance
in the fluid. (See Helmholtz's theorems.) Vortices readily deflect and attach themselves to
Two or more vortices that are approximately parallel and circulating in the same direction
opposite directions.
[edit] Dynamics
Mathematically, vorticity is defined as the curl of the fluid velocity :
[edit] Two types of vortex
In fluid mechanics, a distinction is often made between two limiting vortex cases. One is called
the free (irrotational) vortex, and the other is the forced (rotational) vortex. These are considered
as below:
Two autumn leaves in an
Two autumn leaves in a Two autumn leaves in a irrotational vortex preserve
counter-clockwise vortex rotational vortex rotate with their original orientation while
(reference position). the counter-clockwise flow. moving counter-clockwise.
[edit] Free (irrotational) vortex
When fluid is drawn down a plug-hole, one can observe the phenomenon of a free vortex or line
vortex. The tangential velocity v varies inversely as the distance r from the center of rotation, so
the angular momentum, rv, is constant; the vorticity is zero everywhere (except for a singularity
at the center-line) and the circulation about a contour containing r = 0 has the same value
everywhere. The free surface (if present) dips sharply (as r −2 ) as the center line is approached.
The tangential velocity is given by:
where Γ is the circulation and r is the radial distance from the center of the vortex.
In non-technical terms, the fluid near the center of the vortex circulates faster than the fluid far
from the center. The speed along the circular path of flow is held constant or decreases as you
move out from the center. At the same time the inner streamlines have a shorter distance to travel
to complete a ring. If you were running a race on a circular track would you rather be on the
inside or outside, assuming the goal was to complete a circle? Imagine a leaf floating in a free
vortex. The leaf's tip points to the center and the blade straddles multiple streamlines. The outer
flow is slow in terms of angle traversed and it exerts a backwards tug on the base of the leaf
while the faster inner flow pulls the tip forwards. The drag force opposes rotation of the leaf as it
moves around the circle.
[edit] Forced (rotational) vortex
In a forced vortex the fluid essentially rotates as a solid body (there is no shear). The motion can
be realized by placing a dish of fluid on a turntable rotating at ω radians/sec; the fluid has
vorticity of 2ω everywhere, and the free surface (if present) is a parabola.
The tangential velocity is given by:
where ω is the angular velocity and r is the radial distance from the center of the vortex.
[edit] Vortices in magnets
Different classes of vortex waves also exist in magnets. There are exact solutions to classical
nonlinear magnetic equations e.g. Landau-Lifshitz equation, continuum Heisenberg model,
Ishimori equation, nonlinear Schrödinger equation and so on.
[edit] Observations
circular current of water of conflicting tides often form vortex shapes. Turbulent flow makes
larger than a tornado). [2] On a much smaller scale, a vortex is usually formed as water goes down
water surface.
[edit] Instances
In the hydrodynamic interpretation of the behaviour of electromagnetic fields, the
slow rate at which viscosity dissipates the energy of a vortex.
Lift-induced drag of a wing on an aircraft.
The primary cause of drag in the sail of a sloop.
Maelstrom, Lofoten, Norway.
Hurricane : a much larger, swirling body of clouds produced by evaporating warm ocean
water and influenced by the Earth's rotation. Similar, but far greater, vortices are also
intermittent Great Dark Spot on Neptune.
Polar vortex : a persistent, large-scale cyclone centered near the Earth's poles, in the
middle and upper troposphere and the stratosphere.
Sunspot : dark region on the Sun's surface (photosphere) marked by a lower temperature
than its surroundings, and intense magnetic activity.
The accretion disk of a black hole or other massive gravitational source.
rotating disk. Earth's galaxy, the Milky Way, is of this type.
Shared By: |
82c70c9df475b3e7 | , ,
I’m an atheist for one simple reason: there’s no evidence for the existence of God. Any god, really. Simply put, this universe is too natural for there to be a god. If there was a god, we would know. We wouldn’t need faith, anymore than we need faith for the existence of things like rain or bacon. God would be as big a part of our lives as those bacon cupcakes that I’ve never gotten around to trying. Actually, he’d probably be bigger.
Bacon Cupcake
Are these things even good? Somebody tell me.
But the truth is, even though those cupcakes are a small part of my life, God is even smaller. Everywhere I look, what I see is not God. It shouldn’t work like that. God is supposed to be everywhere, but in reality, he’s nowhere. Now, obviously, absence of evidence is not automatically evidence of absence. But I think that when people claim that their god is everywhere, not finding that god everywhere is a very good sign that those people are wrong.
The universe is filled with mystery. Where did we come from? Why are we here? Why is The Jersey Shore so popular? There used to be many more mysteries. Why do people get sick? What causes lightning? If we put bacon on a cupcake, will people buy it? All of those questions have been answered. And (surprise!) all of those answers have turned out to be not God. In fact, so many answers have turned out to be “not God” that I’m confident that there are no questions for which the answer is “God”. (Except maybe the question, “What is one thing that doesn’t exist?” Bam!) Every question we answer, every place we look, we find “not God”.
But here’s the thing: let’s say that at some point, we find that the answer to a question is, in fact, God. Let’s say that for some reason, we find out that God really loves The Jersey Shore. (“And he hath appointed Snooki to be His prophet, and spread the Word.”) If we discovered proof that God exists, then I will most certainly believe in him. But it would take evidence, not faith, to convince me. Now would I worship him? Of course not. He likes The Jersey Shore, for one thing. Plus, there were those genocides….
But there is no evidence. There is absolutely no evidence that God exists. The Jersey Shore is still unexplained. I have yet to see a single physics equation that has God in it. (“Now if you distribute God through the Schrödinger equation…”) And more importantly, those equations still work fine. God isn’t needed for any of them. And that’s probably because he doesn’t exist. |
caba1043604adbdc | Category Archives: Quantum Quandaries
Quantum Times Book Reviews
Book Review
• Title: Computing With Quantum Cats: From Colossus To Qubits
• Author: John Gribbin
• Publisher: Bantam, 2013
• Author: Jonathan Dowling
• Publisher: CRC Press, 2013
theoretical computer science, at least to the point of understanding
computer. You then need to explain the subtle distinctions between
the same time. The disadvantage is that there is relatively little
space dedicated to the main topic of the book.
In order to weave the book together into a narrative, Gribbin
dedicates each chapter except the last to an individual prominent
with biography, making the book more accessible. The first two
sections on classical computing and quantum theory display Gribbin’s
time” and undue prominence being given to the many-worlds
interpretation apply, but no more than to any other popular treatment
of quantum theory. The explanations are otherwise very good. I
got from abstract Turing machines to modern day classical computers,
topics such as the circuit model and computational complexity in this
section. Instead these topics are squeezed in very briefly into the
quantum computing section, and Gribbin flubs the description of
computational complexity. For example, see if you can spot the
problems with the following three quotes:
category that mathematicians call `complexity class P’…”
The last chapter of Gribbin’s book is an tour of the proposed
experimental implementations of quantum computing and the success
quickly and is rather credulous about the prospects of each
technology. Gribbin also persists with the device of including potted
biographies of the main scientists involved. The total effect is like
running at high speed through an unfamiliar woods, while someone slaps
inclusion of such a detailed chapter was a mistake, especially since
Gribbin includes an epilogue about the controversial issue of discord
inclusion, which will either seem prescient or silly after the debate
well-established theory.
In summary, Gribbin’s has written a good popular book on quantum
unclear explanation in a few areas on the author’s part.
Dowling’s book is a different kettle of fish from Gribbin’s. He
claims to be aiming for the same audience of scientifically curious
know that popular science books written by physicists are really meant
this perspective, there is much valuable material in Dowling’s book.
Dowling is really on form when he is discussing his personal
the experimental implementation of quantum computing and other quantum
technologies. There is also a lot of material about the internal
machinations of military and intelligence funding agencies, which
applying for such funding. As you might expect, Dowling’s assessment
of the prospects of the various proposed technologies is much more
accurate and conservative than Gribbin’s. In particular his treatment
of the cautionary tale of NMR quantum computing is masterful and his
technologies beyond quantum computing and cryptography, such as
quantum metrology, which are often neglected in popular treatments.
of different topics. It starts with a debunking of David Kaiser’s
were instrumental in the development of quantum information via their
involvement in the no-cloning theorem. Dowling rightly points out
that the origins of quantum cryptography are independent of this,
really the person who made it OK for mainstream physicists to think
about the foundations of quantum theory again, and who encouraged his
students and postdocs to do so in information theoretic terms. Later
in the chapter, Dowling moves into extremely speculative territory,
artificial intelligence might be like. I disagree with about as much
entertaining nonetheless.
You may notice that I have avoided talking about the first few
positive things to say about them.
psychoanalysing Einstein. As usual in such treatments, there is a
thin caricature of Einstein’s actual views followed by a lot of
a passion, particularly since Einstein’s response to quantum theory
would have made of Bell’s theorem? Worse than this, Dowling’s
treatment perpetuates the common myth that determinism is one of the
consequences of these theorems, as Dowling maintains throughout the
analogy. Dowling insists on using a single analogy to cover
quite good for explaining classical common cause correlations,
explaining the use of modular arithmetic in Shor’s algorithm.
However, since Dowling has earlier placed such great emphasis on the
flat when describing entanglement in which we have to imagine that the
I think this is confusing and that a more abstract analogy,
e.g. colored balls in boxes, would have been better.
There are also a few places where Dowling makes flatly incorrect
also found Dowling’s criterion for when something should be called an
needed to generate entanglement from them.
For example, the tale of how funding for classical optical computing
dried up after Conway and Mead instigated VLSI design for silicon
There are also whole sections that are so tangentially related to the
string-theory rant in chapter six.
exposition. For example, in a rather silly analogy between Shor’s
algorithm and a fruitcake, the following occurs:
in the 1760s…”
unlikely that you know who Charlie Bennett and Dave Wineland are or
chapters will be utterly confusing. They are explained in the main
being too cute.
Despite these criticisms, I would still recommend Dowling’s book to
physicists and other academics with a professional interest in quantum
Quantum Times Article about Surveys on the Foundations of Quantum Theory
A new edition of The Quantum Times (newsletter of the APS topical group on Quantum Information) is out and I have two articles in it. I am posting the first one here today and the second, a book review of two recent books on quantum computing by John Gribbin and Jonathan Dowling, will be posted later in the week. As always, I encourage you to download the newsletter itself because it contains other interesting articles and announcements other than my own. In particlar, I would like to draw your attention to the fact that Ian Durham, current editor of The Quantum Times, is stepping down as editor at some point before the March meeting. If you are interested in getting more involved in the topical group, I would encourage you to put yourself forward. Details can be found at the end of the newsletter.
Upon reformatting my articles for the blog, I realized that I have reached almost Miguel Navascues levels of crankiness. I guess this might be because I had a stomach bug when I was writing them. Today’s article is a criticism of the recent “Snapshots of Foundational Attitudes Toward Quantum Mechanics” surveys that appeared on the arXiv and generated a lot of attention. The article is part of a point-counterpoint, with Nathan Harshman defending the surveys. Here, I am only posting my part in its original version. The newsletter version is slightly edited from this, most significantly in the removal of my carefully constructed title.
Lies, Damned Lies, and Snapshots of Foundational Attitudes Toward Quantum Mechanics
Q1. Which of the following questions is best resolved by taking a straw
poll of physicists attending a conference?
A. How long ago did the big bang happen?
B. What is the correct approach to quantum gravity?
C. Is nature supersymmetric?
D. What is the correct way to understand quantum theory?
E. None of the above.
By definition, a scientific question is one that is best resolved by
rational argument and appeal to empirical evidence. It does not
matter if definitive evidence is lacking, so long as it is conceivable
that evidence may become available in the future, possibly via
experiments that we have not conceived of yet. A poll is not a valid
method of resolving a scientific question. If you answered anything
other than E to the above question then you must think that at least
one of A-D is not a scientific question, and the most likely culprit
is D. If so, I disagree with you.
It is possible to legitimately disagree on whether a question is
scientific. Our imaginations cannot conceive of all possible ways,
however indirect, that a question might get resolved. The lesson from
history is that we are often wrong in declaring questions beyond the
reach of science. For example, when big bang cosmology was first
introduced, many viewed it as unscientific because it was difficult to
conceive of how its predictions might be verified from our lowly
position here on Earth. We have since gone from a situation in which
many people thought that the steady state model could not be
definitively refuted, to a big bang consensus with wildly fluctuating
estimates of the age of the universe, and finally to a precision value
of 13.77 +/- 0.059 billion years from the WMAP data.
Traditionally, many physicists separated quantum theory into its
“practical part” and its “interpretation”, with the latter viewed as
more a matter of philosophy than physics. John Bell refuted this by
showing that conceptual issues have experimental consequences. The
more recent development of quantum information and computation also
shows the practical value of foundational thinking. Despite these
developments, the view that “interpretation” is a separate
unscientific subject persists. Partly this is because we have a
tendency to redraw the boundaries. “Interpretation” is then a
catch-all term for the issues we cannot resolve, such as whether
Copenhagen, Bohmian mechanics, many-worlds, or something else is the
best way of looking at quantum theory. However, the lesson of big
bang cosmology cautions against labelling these issues unscientific.
Although interpretations of quantum theory are constructed to yield
the same or similar enough predictions to standard quantum theory,
this need not be the case when we move beyond the experimental regime
that is now accessible. Each interpretation is based on a different
explanatory framework, and each suggests different ways of modifying
or generalizing the theory. If we think that quantum theory is not
our final theory then interpretations are relevant in constructing its
successor. This may happen in quantum gravity, but it may equally
happen at lower energies, since we do not yet have an experimentally
confirmed theory that unifies the other three forces. The need to
change quantum theory may happen sooner than you expect, and whichever
explanatory framework yields the next theory will then be proven
correct. It is for this reason that I think question D is scientific.
Regardless of the status of question D, straw polls, such as the three
that recently appeared on the arXiv [1-3], cannot help us to resolve
it, and I find it puzzling that we choose to conduct them for this
question, but not for other controversial issues in physics. Even
during the decades in which the status of big bang cosmology was
controversial, I know of no attempts to poll cosmologists’ views on
it. Such a poll would have been viewed as meaningless by those who
thought cosmology was unscientific, and as the wrong way to resolve
the question by those who did think it was scientific. The same is
true of question D, and the fact that we do nevertheless conduct polls
suggests that the question is not being treated with the same respect
as the others on the list.
Admittedly, polls about controversial scientific questions are
relevant to the sociology of science, and they might be useful to the
beginning graduate student who is more concerned with their career
prospects than following their own rational instincts. From this
perspective, it would be just as interesting to know what percentage
of physicists think that supersymmetry is on the right track as it is
to know about their views on quantum theory. However, to answer such
questions, polls need careful design and statistical analysis. None
of the three polls claims to be scientific and none of them contain
any error analysis. What then is the point of them?
The three recent polls are based on a set of questions designed by
Schlosshauer, Kofler and Zeilinger, who conducted the first poll at a
conference organized by Zeilinger [1]. The questions go beyond just
asking for a preferred interpretation of quantum theory, but in the
interests of brevity I will focus on this aspect alone. In the
Schlosshauer et al. poll, Copenhagen comes out top, closely followed
by “information-based/information-theoretical” interpretations. The
second comes from a conference called “The Philosophy of Quantum
Mechanics” [2]. There was a larger proportion of self-identified
philosophers amongst those surveyed and “I have no preferred
interpretation” came out as the clear winner, not so closely followed
by de Broglie-Bohm theory, which had obtained zero votes in the poll
of Schlosshauer et al. Copenhagen is in joint third place along with
objective collapse theories. The third poll comes from “Quantum
theory without observers III” [3], at which de Broglie-Bohm got a
whopping 63% of the votes, not so closely followed by objective
What we can conclude from this is that people who went to a meeting
organized by Zeilinger are likely to have views similar to Zeilinger.
People who went to a philosophy conference are less likely to be
committed, but are much more likely to pick a realist interpretation
than those who hang out with Zeilinger. Finally, people who went to a
meeting that is mainly about de Broglie-Bohm theory, organized by the
world’s most prominent Bohmians, are likely to be Bohmians. What have
we learned from this that we did not know already?
One thing I find especially amusing about these polls is how easy it
would have been to obtain a more representative sample of physicists’
views. It is straightforward to post a survey on the internet for
free. Then all you have to do is write a letter to Physics Today
asking people to complete the survey and send the URL to a bunch of
mailing lists. The sample so obtained would still be self-selecting
to some degree, but much less so than at a conference dedicated to
some particular approach to quantum theory. The sample would also be
larger by at least an order of magnitude. The ease with which this
could be done only illustrates the extent to which these surveys
should not even be taken semi-seriously.
I could go on about the bad design of the survey questions and about
how the error bars would be huge if you actually bothered to calculate
them. It is amusing how willing scientists are to abandon the
scientific method when they address questions outside their own field.
However, I think I have taken up enough of your time already. It is
time we recognized these surveys for the nonsense that they are.
[1] M. Schlosshauer, J. Kofler and A. Zeilinger, A Snapshot of
Foundational Attitudes Toward Quantum Mechanics, arXiv:1301.1069
[2] C. Sommer, Another Survey of Foundational Attitudes Towards
Quantum Mechanics, arXiv:1303.2719 (2013).
[3] T. Norsen and S. Nelson, Yet Another Snapshot of Foundational
Attitudes Toward Quantum Mechanics, arXiv:1306.4646 (2013).
FQXi Essay Contest
Quantum Times Article on the PBR Theorem
I recently wrote an article (pdf) for The Quantum Times (Newsletter of the APS Topical Group on Quantum Information) about the PBR theorem. There is some overlap with my previous blog post, but the newsletter article focuses more on the implications of the PBR result, rather than the result itself. Therefore, I thought it would be worth reproducing it here. Quantum types should still download the original newsletter, as it contains many other interesting things, including an article by Charlie Bennett on logical depth (which he has also reproduced over at The Quantum Pontiff). APS members should also join the TGQI, and if you are at the March meeting this week, you should check out some of the interesting sessions they have organized.
Note: Due to the appearance of this paper, I would weaken some of the statements in this article if I were writing it again. The results of the paper imply that the factorization assumption is essential to obtain the PBR result, so this is an additional assumption that needs to be made if you want to prove things like Bell’s theorem directly from psi-ontology rather than using the traditional approach. When I wrote the article, I was optimistic that a proof of the PBR theorem that does not require factorization could be found, in which case teaching PBR first and then deriving other results like Bell as a consequence would have been an attractive pedagogical option. However, due to the necessity for stronger assumptions, I no longer think this.
OK, without further ado, here is the article.
PBR, EPR, and all that jazz
In the past couple of months, the quantum foundations world has been abuzz about a new preprint entitled “The Quantum State Cannot be Interpreted Statistically” by Matt Pusey, Jon Barrett and Terry Rudolph (henceforth known as PBR). Since I wrote a blog post explaining the result, I have been inundated with more correspondence from scientists and more requests for comment from science journalists than at any other point in my career. Reaction to the result amongst quantum researchers has been mixed, with many people reacting negatively to the title, which can be misinterpreted as an attack on the Born rule. Others have managed to read past the title, but are still unsure whether to credit the result with any fundamental significance. In this article, I would like to explain why I think that the PBR result is the most significant constraint on hidden variable theories that has been proved to date. It provides a simple proof of many other known theorems, and it supercharges the EPR argument, converting it into a rigorous proof of nonlocality that has the same status as Bell’s theorem. Before getting to this though, we need to understand the PBR result itself.
What are Quantum States?
One of the most debated issues in the foundations of quantum theory is the status of the quantum state. On the ontic view, quantum states represent a real property of quantum systems, somewhat akin to a physical field, albeit one with extremely bizarre properties like entanglement. The alternative to this is the epistemic view, which sees quantum states as states of knowledge, more akin to the probability distributions of statistical mechanics. A psi-ontologist
(as supporters of the ontic view have been dubbed by Chris Granade) might point to the phenomenon of interference in support of their view, and also to the fact that pretty much all viable realist interpretations of quantum theory, such as many-worlds or Bohmian mechanics, include an ontic state. The key argument in favor of the epistemic view is that it dissolves the measurement problem, since the fact that states undergo a discontinuous change in the light of measurement results does not then imply the existence of any real physical process. Instead, the collapse of the wavefunction is more akin to the way that classical probability distributions get updated by Bayesian conditioning in the light of new data.
Many people who advocate a psi-epistemic view also adopt an anti-realist or neo-Copenhagen point of view on quantum theory in which the quantum state does not represent knowledge about some underlying reality, but rather it only represents knowledge about the consequences of measurements that we might make on the system. However, there remained the nagging question of whether it is possible in principle to construct a realist interpretation of quantum theory that is also psi-epistemic, or whether the realist is compelled to think that quantum states are real. PBR have answered this question in the negative, at least within the standard framework for hidden variable theories that we use for other no go results such as Bell’s theorem. As with Bell’s theorem, there are loopholes, so it is better to say that PBR have placed a strong constraint on realist psi-epistemic interpretations, rather than ruling them out entirely.
The PBR Result
To properly formulate the result, we need to know a bit about how quantum states are represented in a hidden variable theory. In such a theory, quantum systems are assumed to have real pre-existing properties that are responsible for determining what happens when we make a measurement. A full specification of these properties is what we mean by an ontic state of the system. In general, we don’t have precise control over the ontic state so a quantum state corresponds to a probability distribution over the ontic states. This framework is illustrated below.
Representation of a quantum state in an ontic model
In an ontic model, a quantum state (indicated heuristically on the left as a vector in the Bloch sphere) is represented by a probability distribution over ontic states, as indicated on the right.
A hidden variable theory is psi-ontic if knowing the ontic state of the system allows you to determine the (pure) quantum state that was prepared uniquely. Equivalently, the probability distributions corresponding to two distinct pure states do not overlap. This is illustrated below.
Psi-ontic model
Representation of a pair of quantum states in a psi-ontic model
A hidden variable theory is psi-epistemic if it is not psi-ontic, i.e. there must exist an ontic state that is possible for more than one pure state, or, in other words, there must exist two nonorthogonal pure states with corresponding distributions that overlap. This is illustrated below.
Psi-epistemic model
Representation of nonorthogonal states in a psi-epistemic model
These definitions of psi-ontology and psi-epistemicism may seem a little abstract, so a classical analogy may be helpful. In Newtonian mechanics the ontic state of a particle is a point in phase space, i.e. a specification of its position and momentum. Other ontic properties of the particle, such as its energy, are given by functions of the phase space point, i.e. they are uniquely determined by the ontic state. Likewise, in a hidden variable theory, anything that is a unique function of the ontic state should be regarded as an ontic property of the system, and this applies to the quantum state in a psi-ontic model. The definition of a psi-epistemic model as the negation of this is very weak, e.g. it could still be the case that most ontic states are only possible in one quantum state and just a few are compatible with more than one. Nonetheless, even this very weak notion is ruled out by PBR.
The proof of the PBR result is quite simple, but I will not review it here because it is summarized in my blog post and the original paper is also very readable. Instead, I want to focus on its implications.
Size of the Ontic State Space
A trivial consequence of the PBR result is that the cardinality of the ontic state space of any hidden variable theory, even for just a qubit, must be infinite, in fact continuously so. This is because there must be at least one ontic state for each quantum state, and there are a continuous infinity of the latter. The fact that there must be infinite ontic states was previously proved by Lucien Hardy under the name “Ontological Excess Baggage theorem”, but we can now
view it as a corollary of PBR. If you think about it, this property is quite surprising because we can only extract one or two bits from a qubit (depending on whether we count superdense coding) so it would be natural to assume that a hidden variable state could be specified by a finite amount of information.
Hidden variable theories provide one possible method of simulating a quantum computer on a classical computer by simply tracking the value of the ontic state at each stage in the computation. This enables us to sample from the probability distribution of any quantum measurement at any point during the computation. Another method is to simply store a representation of the quantum state at each point in time. This second method is clearly inefficient, as the number of parameters required to specify a quantum state grows exponentially with the number of qubits. The PBR theorem tells us that the hidden variable method cannot be any better, as it requires an ontic state space that is at least as big as the set of quantum states. This conclusion was previously drawn by Alberto Montina using different methods, but again it now becomes a corollary of PBR. This result falls short of saying that any classical simulation of a quantum computer must have exponential space complexity, since we usually only have to simulate the outcome of one fixed measurement at the end of the computation and our simulation does not have to track the slice-by-slice causal evolution of the quantum circuit. Indeed, pretty much the first nontrivial result in quantum computational complexity theory, proved by Bernstein and Vazirani, showed that quantum circuits can be simulated with polynomial memory resources. Nevertheless, this result does reaffirm that we need to go beyond slice-by-slice simulations of quantum circuits in looking for efficient classical algorithms.
Supercharged EPR Argument
As emphasized by Harrigan and Spekkens, a variant of the EPR argument favoured by Einstein shows that any psi-ontic hidden variable theory must be nonlocal. Thus, prior to Bell’s theorem, the only open possibility for a local hidden variable theory was a psi-epistemic theory. Of course, Bell’s theorem rules out all local hidden variable theories, regardless of the status of the quantum state within them. Nevertheless, the PBR result now gives an arguably simpler route to the same conclusion by ruling out psi-epistemic theories, allowing us to infer nonlocality directly from EPR.
A sketch of the argument runs as follows. Consider a pair of qubits in the singlet state. When one of the qubits is measured in an orthonormal basis, the other qubit collapses to one of two orthogonal pure states. By varying the basis that the first qubit is measured in, the second qubit can be made to collapse in any basis we like (a phenomenon that Schroedinger called “steering”). If we restrict attention to two possible choices of measurement basis, then there are
four possible pure states that the second qubit might end up in. The PBR result implies that the sets of possible ontic states for the second system for each of these pure states must be disjoint. Consequently, the sets of possible ontic states corresponding to the two distinct choices of basis are also disjoint. Thus, the ontic state of the second system must depend on the choice of measurement made on the first system and this implies nonlocality because I can decide which measurement to perform on the first system at spacelike separation from the second.
PBR as a proto-theorem
We have seen that the PBR result can be used to establish some known constraints on hidden variable theories in a very straightforward way. There is more to this story that I can possibly fit into this article, and I suspect that every major no-go result for hidden variable theories may fall under the rubric of PBR. Thus, even if you don’t care a fig about fancy distinctions between ontic and epistemic states, it is still worth devoting a few braincells to the PBR result. I predict that it will become viewed as the basic result about hidden variable theories, and that we will end up teaching it to our students even before such stalwarts as Bell’s theorem and Kochen-Specker.
Further Reading
For further details of the PBR theorem see:
For constraints on the size of the ontic state space see:
For the early quantum computational complexity results see:
For a fully rigorous version of the PBR+EPR nonlocality argument see:
Can the quantum state be interpreted statistically?
A new preprint entitled The Quantum State Cannot be Interpreted Statistically by Pusey, Barrett and Rudolph (henceforth known as PBR) has been generating a significant amount of buzz in the last couple of days. Nature posted an article about it on their website, Scott Aaronson and Lubos Motl blogged about it, and I have been seeing a lot of commentary about it on Twitter and Google+. In this post, I am going to explain the background to this theorem and outline exactly what it entails for the interpretation of the quantum state. I am not going to explain the technicalities in great detail, since these are explained very clearly in the paper itself. The main aim is to clear up misconceptions.
First up, I would like to say that I find the use of the word “Statistically” in the title to be a rather unfortunate choice. It is liable to make people think that the authors are arguing against the Born rule (Lubos Motl has fallen into this trap in particular), whereas in fact the opposite is true. The result is all about reproducing the Born rule within a realist theory. The question is whether a scientific realist can interpret the quantum state as an epistemic state (state of knowledge) or whether it must be an ontic state (state of reality). It seems to show that only the ontic interpretation is viable, but, in my view, this is a bit too quick. On careful analysis, it does not really rule out any of the positions that are advocated by contemporary researchers in quantum foundations. However, it does answer an important question that was previously open, and confirms an intuition that many of us already held. Before going into more detail, I also want to say that I regard this as the most important result in quantum foundations in the past couple of years, well deserving of a good amount of hype if anything is. I am not sure I would go as far as Antony Valentini, who is quoted in the Nature article saying that it is the most important result since Bell’s theorem, or David Wallace, who says that it is the most significant result he has seen in his career. Of course, these two are likely to be very happy about the result, since they already subscribe to interpretations of quantum theory in which the quantum state is ontic (de Broglie-Bohm theory and many-worlds respectively) and perhaps they believe that it poses more of a dilemma for epistemicists like myself then it actually does.
Classical Ontic States
Before explaining the result itself, it is important to be clear on what all this epistemic/ontic state business is all about and why it matters. It is easiest to introduce the distinction via a classical example, for which the interpretation of states is clear. Therefore, consider the Newtonian dynamics of a single point particle in one dimension. The trajectory of the particle can be determined by specifying initial conditions, which in this case consists of a position \(x(t_0)\) and momentum \(p(t_0)\) at some initial time \(t_0\). These specify a point in the particle’s phase space, which consists of all possible pairs \((x,p)\) of positions and momenta.
Classical Ontic State
The ontic state space for a single classical particle, with the initial ontic state marked.
Then, assuming we know all the relevant forces, we can compute the position and momentum \((x(t),p(t))\) at some other time \(t\) using Newton’s laws or, equivalently, Hamilton’s equations. At any time \(t\), the phase space point \((x(t),p(t))\) can be thought of as the instantaneous state of the particle. It is clearly an ontic state (state of reality), since the particle either does or does not possess that particular position and momentum, independently of whether we know that it possesses those values[1]. The same goes for more complicated systems, such as multiparticle systems and fields. In all cases, I can derive a phase space consisting of configurations and generalized momenta. This is the space of ontic states for any classical system.
Classical Epistemic States
Although the description of classical mechanics in terms of ontic phase space trajectories is clear and unambiguous, we are often, indeed usually, more interested in tracking what we know about a system. For example, in statistical mechanics, we may only know some macroscopic properties of a large collection of systems, such as pressure or temperature. We are interested in how these quantities change over time, and there are many different possible microscopic trajectories that are compatible with this. Generally speaking, our knowledge about a classical system is determined by assigning a probability distribution over phase space, which represents our uncertainty about the actual point occupied by the system.
A classical epistemic state
An epistemic state of a single classical particles. The ellipses represent contour lines of constant probability.
We can track how this probability distribution changes using Liouville’s equation, which is derived by applying Hamilton’s equations weighted with the probability assigned to each phase space point. The probability distribution is pretty clearly an epistemic state. The actual system only occupies one phase space point and does not care what probability we have assigned to it. Crucially, the ontic state occupied by the system would be regarded as possible by us in more than one probability distribution, in fact it is compatible with infinitely many.
Overlapping epistemic states
Epistemic states can overlap, so each ontic state is possible in more than one epistemic state. In this diagram, the two phase space axes have been schematically compressed into one, so that we can sketch the probability density graphs of epistemic states. The ontic state marked with a cross is possible in both epistemic states sketched on the graph.
Quantum States
We have seen that there are two clear notions of state in classical mechanics: ontic states (phase space points) and epistemic states (probability distributions over the ontic states). In quantum theory, we have a different notion of state — the wavefunction — and the question is: should we think of it as an ontic state (more like a phase space point), an epistemic state (more like a probability distribution), or something else entirely?
Here are three possible answers to this question:
1. Wavefunctions are epistemic and there is some underlying ontic state. Quantum mechanics is the statistical theory of these ontic states in analogy with Liouville mechanics.
2. Wavefunctions are epistemic, but there is no deeper underlying reality.
3. Wavefunctions are ontic (there may also be additional ontic degrees of freedom, which is an important distinction but not relevant to the present discussion).
I will call options 1 and 2 psi-epistemic and option 3 psi-ontic. Advocates of option 3 are called psi-ontologists, in an intentional pun coined by Chris Granade. Options 1 and 3 share a conviction of scientific realism, which is the idea that there must be some description of what is going on in reality that is independent of our knowledge of it. Option 2 is broadly anti-realist, although there can be some subtleties here[2].
The theorem in the paper attempts to rule out option 1, which would mean that scientific realists should become psi-ontologists. I am pretty sure that no theorem on Earth could rule out option 2, so that is always a refuge for psi-epistemicists, at least if their psi-epistemic conviction is stronger than their realist one.
I would classify the Copenhagen interpretation, as represented by Niels Bohr[3], under option 2. One of his famous quotes is:
and “what we can say” certainly seems to imply that we are talking about our knowledge of reality rather than reality itself. Various contemporary neo-Copenhagen approaches also fall under this option, e.g. the Quantum Bayesianism of Carlton Caves, Chris Fuchs and Ruediger Schack; Anton Zeilinger’s idea that quantum physics is only about information; and the view presently advocated by the philosopher Jeff Bub. These views are safe from refutation by the PBR theorem, although one may debate whether they are desirable on other grounds, e.g. the accusation of instrumentalism.
Pretty much all of the well-developed interpretations that take a realist stance fall under option 3, so they are in the psi-ontic camp. This includes the Everett/many-worlds interpretation, de Broglie-Bohm theory, and spontaneous collapse models. Advocates of these approaches are likely to rejoice at the PBR result, as it apparently rules out their only realist competition, and they are unlikely to regard anti-realist approaches as viable.
Perhaps the best known contemporary advocate of option 1 is Rob Spekkens, but I also include myself and Terry Rudolph (one of the authors of the paper) in this camp. Rob gives a fairly convincing argument that option 1 characterizes Einstein’s views in this paper, which also gives a lot of technical background on the distinction between options 1 and 2.
Why be a psi-epistemicist?
Why should the epistemic view of the quantum state should be taken seriously in the first place, at least seriously enough to prove a theorem about it? The most naive argument is that, generically, quantum states only predict probabilities for observables rather than definite values. In this sense, they are unlike classical phase space points, which determine the values of all observables uniquely. However, this argument is not compelling because determinism is not the real issue here. We can allow there to be some genuine stochasticity in nature whilst still maintaining realism.
An argument that I personally find motivating is that quantum theory can be viewed as a noncommutative generalization of classical probability theory, as was first pointed out by von Neumann. My own exposition of this idea is contained in this paper. Even if we don’t always realize it, we are always using this idea whenever we generalize a result from classical to quantum information theory. The idea is so useful, i.e. it has such great explanatory power, that it would be very puzzling if it were a mere accident, but it does appear to be just an accident in most psi-ontic interpretations of quantum theory. For example, try to think about why quantum theory should be formally a generalization of probability theory from a many-worlds point of view. Nevertheless, this argument may not be compelling to everyone, since it mainly entails that mixed states have to be epistemic. Classically, the pure states are the extremal probability distributions, i.e. they are just delta functions on a single ontic state. Thus, they are in one-to-one correspondence with the ontic states. The same could be true of pure quantum states without ruining the analogy[5].
A more convincing argument concerns the instantaneous change that occurs after a measurement — the collapse of the wavefunction. When we acquire new information about a classical epistemic state (probability distribution) say by measuring the position of a particle, it also undergoes an instantaneous change. All the weight we assigned to phase space points that have positions that differ from the measured value is rescaled to zero and the rest of the probability distribution is renormalized. This is just Bayesian conditioning. It represents a change in our knowledge about the system, but no change to the system itself. It is still occupying the same phase space point as it was before, so there is no change to the ontic state of the system. If the quantum state is epistemic, then instantaneous changes upon measurement are unproblematic, having a similar status to Bayesian conditioning. Therefore, the measurement problem is completely dissolved within this approach.
Finally, if we allow a more sophisticated analogy between quantum states and probabilities, in particular by allowing constraints on how much may be known and allowing measurements to locally disturb the ontic state, then we can qualitatively explain a large number of phenomena that are puzzing for a psi-ontologist very simply within a psi-epistemic approach. These include: teleportation, superdense coding, and much of the rest of quantum information theory. Crucially, it also includes interference, which is often held as a convincing reason for psi-ontology. This was demonstrated in a very convincing way by Rob Spekkens via a toy theory, which is recommended reading for all those interested in quantum foundations. In fact, since this paper contains the most compelling reasons for being a psi-epistemicist, you should definitely make sure you read it so that you can be more shocked by the PBR result.
Ontic models
If we accept that the psi-epistemic position is reasonable, then it would be superficially resonable to pick option 1 and try to maintain scientific realism. This leads us into the realm of ontic models for quantum theory, otherwise known as hidden variable theories[6]. A pretty standard framework for discussing such models has existed since John Bell’s work in the 1960’s, and almost everyone adopts the same definitions that were laid down then. The basic idea is that systems have properties. There is some space \(\Lambda\) of ontic states, analogous to the phase space of a classical theory, and the system has a value \(\lambda \in \Lambda\) that specifies all its properties, analogous to the phase space points. When we prepare a system in some quantum state \(\Ket{\psi}\) in the lab, what is really happening is that an ontic state \(\lambda\) is sampled from a probability distribution over \(\mu(\lambda)\) that depends on \(\Ket{\psi}\).
Representation of a quantum state in an ontic model
We also need to know how to represent measurements in the model[7]. For each possible measurement that we could make on the system, the model must specify the outcome probabilities for each possible ontic state. Note that we are not assuming determinism here. The measurement is allowed to be stochastic even given a full specification of the ontic state. Thus, for each measurement \(M\), we need a set of functions \(\xi^M_k(\lambda)\) , where \(k\) labels the outcome. \(\xi^M_k(\lambda)\) is the probability of obtaining outcome \(k\) in a measurement of \(M\) when the ontic state is \(\lambda\). In order for these probabilities to be well defined the functions \(\xi^M_k\) must be positive and they must satisfy \(\sum_k \xi^M_k(\lambda) = 1\) for all \(\lambda \in \Lambda\). This normalization condition is very important in the proof of the PBR theorem, so please memorize it now.
Overall, the probability of obtaining outcome \(k\) in a measurement of \(M\) when the system is prepared in state \(\Ket{\psi}\) is given by
\[\mbox{Prob}(k|M,\Ket{\psi}) = \int_{\Lambda} \xi^M_k(\lambda) \mu(\lambda) d\lambda, \]
which is just the average of the outcome probabilities over the ontic state space.
If the model is going to reproduce the predictions of quantum theory, then these probabilities must match the Born rule. Suppose that the \(k\)th outcome of \(M\) corresponds to the projector \(P_k\). Then, this condition boils down to
\[\Bra{\psi} P_k \Ket{\psi} = \int_{\Lambda} \xi^M_k(\lambda) \mu(\lambda) d\lambda,\]
and this must hold for all quantum states, and all outcomes of all possible measurements.
Constraints on Ontic Models
Even disregarding the PBR paper, we already know that ontic models expressible in this framework have to have a number of undesirable properties. Bell’s theorem implies that they have to be nonlocal, which is not great if we want to maintain Lorentz invariance, and the Kochen-Specker theorem implies that they have to be contextual. Further, Lucien Hardy’s ontological excess baggage theorem shows that the ontic state space for even a qubit would have to have infinite cardinality. Following this, Montina proved a series of results, which culminated in the claim that there would have to be an object satisfying the Schrödinger equation present within the ontic state (see this paper). This latter result is close to the implication of the PBR theorem itself.
Given these constraints, it is perhaps not surprising that most psi-epistemicists have already opted for option 2, denouncing scientific realism entirely. Those of us who cling to realism have mostly decided that the ontic state must be a different type of object than it is in the framework described above. We could discard the idea that individual systems have well-defined properties, or the idea that the probabilities that we assign to those properties should depend only on the quantum state. Spekkens advocates the first possibility, arguing that only relational properties are ontic. On the other hand, I, following Huw Price, am partial to the idea of epistemic hidden variable theories with retrocausal influences, in which case the probability distributions over ontic states would depend on measurement choices as well as which quantum state is prepared. Neither of these possibilities are ruled out by the previous results, and they are not ruled out by PBR either. This is why I say that their result does not rule out any position that is seriously held by any researchers in quantum foundations. Nevertheless, until the PBR paper, there remained the question of whether a conventional psi-epistemic model was possible even in principle. Such a theory could at least have been a competitor to Bohmian mechanics. This possibility has now been ruled out fairly convincingly, and so we now turn to the basic idea of their result.
The Result
Recall from our classical example that each ontic state (phase space point) occurs in the support of more than one epistemic state (Liouville distribution), in fact infinitely many. This is just because probability distributions can have overlapping support. Now, consider what would happen if we restricted the theory to only allow epistemic states with disjoint support. For example, we could partition phase space into a number of disjoint cells and only consider probability distributions that are uniform over one cell and zero everywhere else.
Restricted classical theory
A restricted classical theory in which only the distributions indicated are allowed as epistemic states. In this case, each ontic state is only possible in one epistemic state, so it is more accurate to say that the epistemic states represent a property of the ontic state.
Given this restriction, the ontic state determines the epistemic state uniquely. If someone tells you the ontic state, then you know which cell it is in, so you know what the epistemic state must be. Therefore, in this restricted theory, the epistemic state is not really epistemic. Its image is contained in the ontic state, and it would be better to say that we were talking about a property of the ontic state, rather than something that represents knowledge. According to the PBR result, this is exactly what must happen in any ontic model of quantum theory within the Bell framework.
Here is the analog of this in ontic models of quantum theory. Suppose that two nonorthogonal quantum states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) are represented as follows in an ontic model:
Psi-epistemic model
Representation of nonorthogonal states in a psi-epistemic model
Because the distributions overlap, there are ontic states that are compatible with more than one quantum states, so this is a psi-epistemic model.
In contrast, if, for every pair of quantum states \(\Ket{\psi_1},\Ket{\psi_2}\), the probability distributions do not overlap, i.e. the representation of each pair looks like this
Psi-ontic model
then the quantum state is uniquely determined by the ontic state, and it is therefore better regarded as a property of \(\lambda\) rather than a representation of knowledge. Such a model is psi-ontic. The PBR theorem states that all ontic models that reproduce the Born rule must be psi-ontic.
Sketch of the proof
In order to establish the result, PBR make use of the following idea. In an ontic model, the ontic state \(\lambda\) determines the probabilities for the outcomes of any possible measurement via the functions \(\xi^M_k\). The Born rule probabilities must be obtained by averaging these conditional probabilities with respect to the probability distribution \(\mu(\lambda)\) representing the quantum state. Suppose there is some measurement \(M\) that has an outcome \(k\) to which the quantum state \(\Ket{\psi}\) assigns probability zero according to the Born rule. Then, it must be the case that \(\xi^M_k(\lambda) = 0\) for every \(\lambda\) in the support of \(\mu(\lambda)\). Now consider two quantum states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) and suppose that we can find a two outcome measurement such that that the first state gives zero Born rule probability to the first outcome and the second state gives zero Born rule probability to the second outcome. Suppose also that there is some \(\lambda\) that is in the support of both the distributions, \(\mu_1\) and \(\mu_2\), that represent \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) in the ontic model. Then, we must have \(\xi^M_1(\lambda) = \xi^M_2(\lambda) = 0\), which contradicts the normalization assumption \(\xi^M_1(\lambda) + \xi^M_2(\lambda) = 1\).
Now, it is fairly easy to see that there is no such measurement for a pair of nonorthogonal states, because this would mean that they could be distinguished with certainty, so we do not have a result quite yet. The trick to get around this is to consider multiple copies. Consider then, the four states \(\Ket{\psi_1}\otimes\Ket{\psi_1}, \Ket{\psi_1}\otimes\Ket{\psi_2}, \Ket{\psi_2}\otimes\Ket{\psi_1}\) and \(\Ket{\psi_2}\otimes\Ket{\psi_2}\) and suppose that there is a four outcome measurement such that \(\Ket{\psi_1}\otimes\Ket{\psi_1}\) gives zero probability to the first outcome, \(\Ket{\psi_1}\otimes\Ket{\psi_2}\) gives zero probability to the second outcome, and so on. In addition to this, we make an independence assumption that the probability distributions representing these four states must satisfy. Let \(\lambda\) be the ontic state of the first system and let \(\lambda’\) be the ontic state of the second. The independence assumption states that the probability densities representing the four quantum states in the ontic model are \(\mu_1(\lambda)\mu_1(\lambda’), \mu_1(\lambda)\mu_2(\lambda’), \mu_2(\lambda)\mu_1(\lambda’)\) and \(\mu_2(\lambda)\mu_2(\lambda’)\). This is a reasonable assumption because there is no entanglement between the two systems and we could do completely independent experiments on each of them. Assuming there is an ontic state \(\lambda\) in the support of both \(\mu_1\) and \(\mu_2\), there will be some nonzero probability that both systems occupy this ontic state whenever any of the four states are prepared. But, in this case, all four functions \(\xi^M_1,\xi^M_2,\xi^M_3\) and \(\xi^M_4\) must have value zero when both systems are in this state, which contradicts the normalization \(\sum_k \xi^M_k = 1\).
This argument works for the pair of states \(\Ket{\psi_1} = \Ket{0}\) and \(\Ket{\psi_2} = \Ket{+} = \frac{1}{\sqrt{2}} \left ( \Ket{0} + \Ket{1}\right )\). In this case, the four outcome measurement is a measurement in the basis:
\[\Ket{\phi_1} = \frac{1}{\sqrt{2}} \left ( \Ket{0}\otimes\Ket{1} + \Ket{1} \otimes \Ket{0} \right )\]
\[\Ket{\phi_2} = \frac{1}{\sqrt{2}} \left ( \Ket{0}\otimes\Ket{-} + \Ket{1} \otimes \Ket{+} \right )\]
\[\Ket{\phi_3} = \frac{1}{\sqrt{2}} \left ( \Ket{+}\otimes\Ket{1} + \Ket{-} \otimes \Ket{0} \right )\]
\[\Ket{\phi_4} = \frac{1}{\sqrt{2}} \left ( \Ket{+}\otimes\Ket{-} + \Ket{-} \otimes \Ket{+} \right ),\]
where \(\Ket{-} = \frac{1}{\sqrt{2}} \left ( \Ket{0} – \Ket{1}\right )\). It is easy to check that \(\Ket{\phi_1}\) is orthogonal to \(\Ket{0}\otimes\Ket{0}\), \(\Ket{\phi_2}\) is orthogonal to \(\Ket{0}\otimes\Ket{+}\), \(\Ket{\phi_3}\) is orthogonal to \(\Ket{+}\otimes\Ket{0}\), and \(\Ket{\phi_4}\) is orthogonal to \(\Ket{+}\otimes\Ket{+}\). Therefore, the argument applies and there can be no overlap in the probability distributions representing \(\Ket{0}\) and \(\Ket{+}\) in the model.
To establish psi-ontology, we need a similar argument for every pair of states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\). PBR establish that such an argument can always be made, but the general case is more complicated and requires more than two copies of the system. I refer you to the paper for details where it is explained very clearly.
The PBR theorem rules out psi-epistemic models within the standard Bell framework for ontological models. The remaining options are to adopt psi-ontology, remain psi-epistemic and abandon realism, or remain psi-epistemic and abandon the Bell framework. One of the things that a good interpretation of a physical theory should have is explanatory power. For me, the epistemic view of quantum states is so explanatory that it is worth trying to preserve it. Realism too is something that we should not abandon too hastily. Therefore, it seems to me that we should be questioning the assumptions of the Bell framework by allowing more general ontologies, perhaps involving relational or retrocausal degrees of freedom. At the very least, this option is the path less travelled, so we might learn something by exploring it more thoroughly.
1. There are actually subtleties about whether we should think of phase space points as instantaneous ontic states. For one thing, the momentum depends on the first derivative of position, so maybe we should really think of the state being defined on an infinitesimal time interval. Secondly, the fact that momentum appears is because Newtonian mechanics is defined by second order differential equations. If it were higher order then we would have to include variables depending on higher derivatives in our definition of phase space. This is bad if you believe in a clean separation between basic ontology and physical laws. To avoid this, one could define the ontic state to be the position only, i.e. a point in configuration space, and have the boundary conditions specified by the position of the particle at two different times. Alternatively, one might regard the entire spacetime trajectory of the particle as the ontic state, and regard the Newtonian laws themselves as a mere pattern in the space of possible trajectories. Of course, all these descriptions are mathematically equivalent, but they are conceptually quite different and they lead to different intuitions as to how we should understand the concept of state in quantum theory. For present purposes, I will ignore these subtleties and follow the usual practice of regarding phase space points as the unambiguous ontic states of classical mechanics. []
2. The subtlety is basically a person called Chris Fuchs. He is clearly in the option 2 camp, but claims to be a scientific realist. Whether he is successful at maintaining realism is a matter of debate. []
3. Note, this is distinct from the orthodox interpretation as represented by the textbooks of Dirac and von-Neumann, which is also sometimes called the Copenhagen interpretation. Orthodoxy accepts the eigenvalue-eigenstate link. Observables can sometimes have definite values, in which case they are objective properties of the system. A system has such a property when it is in an eigenstate of the corresponding observable. Since every wavefunction is an eigenstate of some observable, it follows that this is a psi-ontic view, albeit one in which there are no additional ontic degrees of freedom beyond the quantum state. []
4. Sourced from Wikiquote. []
5. but note that the resulting theory would essentially be the orthodox interpretation, which has a measurement problem. []
6. The terminology “ontic model” is preferred to “hidden variable theory” for two reasons. Firstly, we do not want to exclude the case where the wavefunction is ontic, but there are no extra degrees of freedom (as in the orthodox interpretation). Secondly, it is often the case that the “hidden” variables are the ones that we actually observe rather than the wavefunction, e.g. in Bohmian mechanics the particle positions are not “hidden”. []
7. Generally, we would need to represent dynamics as well, but the PBR theorem does not depend on this. []
The Choi-Jamiolkowski Isomorphism: You’re Doing It Wrong!
As the dear departed Quantum Pontiff used to say: New Paper Dance! I am pretty happy that this one has finally been posted because it is my first arXiv paper since I returned to work, and also because it has gone through more rewrites than Spiderman: The Musical.
What is the paper about, I hear you ask? Well, mathematically, it is about an extremely simple linear algebra trick called the Choi-Jamiolkwoski isomorphism. This is actually two different results: the Choi isomorphism and the Jamiolkowski isomorphism, but people have a habit of lumping them together. This trick is so extremely well-known to quantum information theorists that it is not even funny. One of the main points of the paper is that you should think about what the isomorphism means physically in a new way. Hence the “you’re doing it wrong” in the post title.
First Level Isomorphisms
For the uninitiated, here is the simplest way of describing the Choi isomorphism in a single equation:
\[\Ket{j}\Bra{k} \qquad \qquad \equiv \qquad \qquad \Ket{j} \otimes \Ket{k},\]
i.e. the ismomorphism works by turning a bra into a ket. The thing on the left is an operator on a Hilbert space \(\mathcal{H}\) and the thing on the right is a vector in \(\mathcal{H} \otimes \mathcal{H}\), so the isomorphism says that \(\mathcal{L}(\mathcal{H}) \equiv \mathcal{H} \otimes \mathcal{H}\), where \(\mathcal{L}(\mathcal{H})\) is the space of linear operators on \(\mathcal{H}\).
Here is how it works in general. If you have an operator \(U\) then you can pick a basis for \(\mathcal{H}\) and write \(U\) in this basis as
\[U = \sum_{j,k} U_{j,k} \Ket{j}\Bra{k},\]
where \(U_{j,k} = \Bra{j}U\Ket{k}\). Then you just extend the above construction by linearity and write down a vector
\[\Ket{\Phi_U} = \sum_{j,k} U_{j,k} \Ket{j} \otimes \Ket{k}.\]
It is pretty obvious that we can go in the other direction as well, starting with a vector on \(\mathcal{H}\otimes\mathcal{H}\), we can write it out in a product basis, turn the second ket into a bra, and then we have an operator.
So far, this is all pretty trivial linear algebra, but when we think about what this means physically it is pretty weird. One of the things that is represented by an operator in quantum theory is dynamics, in particular a unitary operator represents the dynamics of a closed system for a discrete time-step. One of the things that is represented by a vector on a tensor product Hilbert space is a pure state of a bipartite system. It is fairly easy to see that (up to normalization) unitary operators get mapped to maximally entangled states under the isomorphism, so, in some sense, a maximally entangled state is “the same thing” as a unitary operator. This is weird because there are some things that make sense for dynamical operators that don’t seem to make sense for states and vice-versa. For example, dynamics can be composed. If \(U\) represents the dynamics from \(t_0\) to \(t_1\) and \(V\) represents the dynamics from \(t_1\) to \(t_2\), then the dynamics from \(t_0\) to \(t_2\) is represented by the product \(VU\). Using the isomorphism, we can define a composition for states, but what on earth does this mean?
Before getting on to that, let us briefly pause to consider the Jamiolkowski version of the isomorphism. The Choi isomorphism is basis dependent. You get a slightly different state if you write down the operator in a different basis. To make things basis independent, we replace \(\mathcal{H}\otimes\mathcal{H}\) by \(\mathcal{H}\otimes\mathcal{H}^*\). \(\mathcal{H}^*\) denotes the dual space to \(\mathcal{H}\), i.e. it is the space of bras instead of the space of kets. In Dirac notation, the Jamiolkwoski isomorphism looks pretty trivial. It says
\[\Ket{j}\Bra{k} \qquad \qquad \equiv \qquad \qquad \Ket{j} \otimes \Bra{k}.\]
This is axiomatic in Dirac notation, because we always assume that tensor product symbols can be omitted without changing anything. However, this version of the isomorphism is going to become important later.
Conventional Interpretation: Gate Teleportation
In quantum information, the Choi isomorphism is usually interpreted in terms of “gate teleportation”. To understand this, we first reformulate the isomorphism slightly. Let \(\Ket{\Phi^+}_{AA’} = \sum_j \Ket{jj}_{AA’}\), where \(A\) and \(A’\) are quantum systems with Hilbert spaces of the same dimension. The vectors \(\Ket{j}\) form a preferred basis, and this is the basis in which the Choi isomorphism is going to be defined. Note that \(\Ket{\Phi^+}_{AA’}\) is an (unnormalized) maximally entangled state. It is easy to check that the isomorphism can now be reformulated as
\[\Ket{\Phi_U}_{AA'} = I_A \otimes U_A' \Ket{\Phi^+}_{AA'},\]
where \(I_A\) is the identity operator on system \(A\). The reverse direction of the isomorphism is given by
\[U_A \Ket{\psi}\Bra{\psi}_A U_A^{\dagger} = \Bra{\Phi^+}_{A'A''} \left ( \Ket{\psi}\Bra{\psi}_{A''} \otimes \Ket{\Phi_U}\Bra{\Phi_U}_{A'A} \right )\Ket{\Phi^+}_{A'A''},\]
where \(A^{\prime\prime}\) is yet another quantum system with the same Hilbert space as \(A\).
Now let’s think about the physical interpretation of the reverse direction of the isomorphism. Suppose that \(U\) is the identity. In that case, \(\Ket{\Phi_U} = \Ket{\Phi^+}\) and the reverse direction of the isomorphism is easily recognized as the expression for the output of the teleportation protocol when the \(\Ket{\Phi^+}\) outcome is obtained in the Bell measurement. It says that \(\Ket{\psi}\) gets teleported from \(A^{\prime\prime}\) to \(A\). Of course, this outcome only occurs some of the time, with probability \(1/d\), where \(d\) is the dimension of the Hilbert space of \(A\), a fact that is obscured by our decision to use an unnormalized version of \(\Ket{\Phi^+}\).
Now, if we let \(U\) be a nontrivial unitary operator then the reverse direction of the isomorphism says something more interesting. If we use the state \(\Ket{\Phi_U}\) rather than \(\Ket{\Phi^+}\) as our resource state in the teleportation protocol, then, upon obtaining the \(\Ket{\Phi^+}\) outcome in the Bell measurement, the output of the protocol will not simply be the input state \(\Ket{\psi}\), but it will be that state with the unitary \(U\) applied to it. This is called “gate teleportation”. It has many uses in quantum computing. For example, in linear optics implementations, it is impossible to perform every gate in a universal set with 100% probability. To avoid damaging your precious computational state, you can apply the indeterministic gates to half of a maximally entangled state and keep doing so until you get one that succeeds. Then you can teleport your computational state using the resulting state as a resource and end up applying the gate that you wanted. This allows you to use indeterministic gates without having to restart the computation from the beginning every time one of these gates fails.
Using this interpretation of the isomorphism, we can also come up with a physical interpretation of the composition of two states. It is basically a generalization of entanglement swapping. If you take \(\Ket{\Phi_U}\) and \(\Ket{\Phi_{V}}\) and and perform a Bell measurement across the output system of the first and the input system of the second then, upon obtaining the \(\Ket{\Phi^+}\) outcome, you will have the state \(\Ket{\Phi_{UV}}\). In this way, you can perform your entire computational circuit in advance, before you have access to the input state, and then just teleport your input state into the output register as the final step.
In this way, the Choi isomorphism leads to a correspondence between a whole host of protocols involving gates and protocols involving entangled states. We can also define interesting properties of operations, such as the entanglement of an operation, in terms of the states that they correspond to. We then use the isomoprhism to give a physical meaning to these properties in terms of gate teleportation. However, one weak point of the correspondence is that it transforms something deterministic; the application of a unitary operation; into something indeterministic; getting the \(\Ket{\Phi^+}\) outcome in a Bell measurement. Unlike the teleportation protocol, gate teleportation cannot be made deterministic by applying correction operations for the other outcomes, at least not if we want these corrections to be independent of \(U\). The states you get for the other outcomes involve nasty things like \(U^*, U^T, U^\dagger\) applied to \(\Ket{\psi}\), depending on exactly how you construct the Bell basis, e.g. choice of phases. These can typically not be corrected without applying \(U\). In particular, that would screw things up in the linear optics application wherein \(U\) can only be implemented non-deterministically.
Before turning to our alternative interpretation of Choi-Jamiolkowski, let’s generalize things a bit.
Second Level Isomorphisms
In quantum theory we don’t just have pure states, but also mixed states that arise if you have uncertainty about which state was prepared, or if you ignore a subsystem of a larger system that is in a pure state. These are described by positive, trace-one, operators, denoted \(\rho\), called density operators. Similarly, dynamics does not have to be unitary. For example, we might bring in an extra system, interact them unitarily, and then trace out the extra system. These are described by Completely-Positive, Trace-Preserving (CPT) maps, denoted \(\mathcal{E}\). These are linear maps that act on the space of operators, i.e. they are operators on the space of operators, and are often called superoperators.
Now, the set of operators on a Hilbert space is itself a Hilbert space with inner product \(\left \langle N, M \right \rangle = \Tr{N^{\dagger}M}\). Thus, we can apply Choi-Jamiolkowski on this space to define a correspondence between superoperators and operators on the tensor product. We can do this in terms of an orthonormal operator basis with respect to the trace inner product, but it is easier to just give the teleportation version of the isomorphism. We will also generalize slightly to allow for the possibility that the input and output spaces of our CPT map may be different, i.e. it may involve discarding a subsystem of the system we started with, or bringing in extra ancillary systems.
Starting with a CPT map \(\mathcal{E}_{B|A}: \mathcal{L}(\mathcal{H}_A) \rightarrow \mathcal{L}(\mathcal{H}_B)\) from system \(A\) to system \(B\), we can define an operator on \(\mathcal{H}_A \otimes \mathcal{H}_B\) via
\[\rho_{AB} = \mathcal{E}_{B|A'} \otimes \mathcal{I}_{A} \left ( \Ket{\Phi^+}\Bra{\Phi^+}_{AA'}\right ),\]
where \(\mathcal{I}_A\) is the identity superoperator. This is a positive operator, but it is not quite a density operator as it satisfies \(\PTr{B}{\rho_{AB}} = I_A\), which implies that \(\PTr{AB}{\rho_{AB}} = d\) rather than \(\PTr{AB}{\rho_{AB}} = 1\). This is analogous to using unnormalized states in the pure-state case. The reverse direction of the isomorphism is then given by
\[\mathcal{E}_{B|A} \left ( \sigma_A \right ) = \Bra{\Phi^+}_{A'A}\sigma_{A'} \otimes \rho_{AB}\Ket{\Phi^+}_{A'A}.\]
This has the same interpretation in terms of gate teleportation (or rather CPT-map teleportation) as before.
The Jamiolkowski version of this isomorphism is given by
\[\varrho_{AB} = \mathcal{E}_{B|A'} \otimes \mathcal{I}_{A} \left ( \Ket{\Phi^+}\Bra{\Phi^+}_{AA'}^{T_A}\right ),\]
where \(^T_A\) denotes the partial transpose in the basis used to define \(\Ket{\Phi^+}\). Although it is not obvious from this formula, this operator is independent of the choice of basis, as \(\Ket{\Phi^+}\Bra{\Phi^+}_{AA’}^{T_A}\) is actually the same operator for any choice of basis. I’ll keep the reverse direction of the isomorphism a secret for now, as it would give a strong hint towards the punchline of this blog post.
Probability Theory
I now want to give an alternative way of thinking about the isomorphism, in particular the Jamiolkowski version, that is in many ways conceptually clearer than the gate teleportation interpretation. The starting point is the idea that quantum theory can be viewed as a noncommutative generalization of classical probability theory. This idea goes back at least to von Neumann, and is at the root of our thinking in quantum information theory, particularly in quantum Shannon theory. The basic idea of the generalization is that that probability distributions \(P(X)\) get mapped to density operators \(\rho_A\) and sums over variables become partial traces. Therefore, let’s start by thinking about whether there is a classical analog of the isomorphism, and, if so, what its interpretation is.
Suppose we have two random variables, \(X\) and \(Y\). We can define a conditional probability distribution of \(Y\) given \(X\), \(P(Y|X)\), as a positive function of the two variables that satisfies \(\sum_Y P(Y|X) = 1\) independently of the value of \(X\). Given a conditional probability distribution and a marginal distribution, \(P(X)\), for \(X\), we can define a joint distribution via
\[P(X,Y) = P(Y|X)P(X).\]
Conversely, given a joint distribution \(P(X,Y)\), we can find the marginal \(P(X) = \sum_Y P(X,Y)\) and then define a conditional distribution
\[P(Y|X) = \frac{P(X,Y)}{P(X)}.\]
Note, I’m going to ignore the ambiguities in this formula that occur when \(P(X)\) is zero for some values of \(X\).
Now, suppose that \(X\) and \(Y\) are the input and output of a classical channel. I now want to think of the probability distribution of \(Y\) as being determined by a stochastic map \(\Gamma_{Y|X}\) from the space of probability distributions over \(X\) to the space of probability distributions over \(Y\). Since \(P(Y) = \sum_{X} P(X,Y)\), this has to be given by
\[P(Y) = \Gamma_{Y|X} \left ( P(X)\right ) = \sum_X P(Y|X) P(X),\]
\[\Gamma_{Y|X} \left ( \cdot \right ) = \sum_{X} P(Y|X) \left ( \cdot \right )\].
What we have here is a correspondence between a positive function of two variables — the conditional proabability distribution — and a linear map that acts on the space of probability distributions — the stochastic map. This looks analogous to the Choi-Jamiolkowski isomorphism, except that, instead of a joint probability distribution, which would be analogous to a quantum state, we have a conditional probability distribution. This suggests that we made a mistake in thinking of the operator in the Choi isomorphism as a state. Maybe it is something more like a conditional state.
Conditional States
Let’s just plunge in and make a definition of a conditional state, and then see how it makes sense of the Jamiolkowski isomorphism. For two quantum systems, \(A\) and \(B\), a conditional state of \(B\) given \(A\) is defined to be a positive operator \(\rho_{B|A}\) on \(\mathcal{H}_A \otimes \mathcal{H}_B\) that satisfies
\[\PTr{B}{\rho_{B|A}} = I_A.\]
This is supposed to be analogous to the condition \(\sum_Y P(Y|X) = 1\). Notice that this is exactly how the operators that are Choi-isomorphic to CPT maps are normalized.
Given a conditional state, \(\rho_{B|A}\), and a reduced state \(\rho_A\), I can define a joint state via
\[\rho_{AB} = \sqrt{\rho_A} \rho_{B|A} \sqrt{\rho_A},\]
where I have suppressed the implicit \(\otimes I_B\) required to make the products well defined. The conjugation by the square root ensures that \(\rho_{AB}\) is positive, and it is easy to check that \(\PTr{AB}{\rho_{AB}} = 1\).
Conversely, given a joint state, I can find its reduced state \(\rho_A = \PTr{B}{\rho_{AB}}\) and then define the conditional state
\[\rho_{B|A} = \sqrt{\rho_A^{-1}} \rho_{AB} \sqrt{\rho_A^{-1}},\]
where I am going to ignore cases in which \(\rho_A\) has any zero eigenvalues so that the inverse is well-defined (this is no different from ignoring the division by zero in the classical case).
Now, suppose you are given \(\rho_A\) and you want to know what \(\rho_B\) should be. Is there a linear map that tells you how to do this, analogous to the stochastic map \(\Gamma_{Y|X}\) in the classical case? The answer is obviously yes. We can define a map \(\mathfrak{E}_{B|A}: \mathcal{L} \left ( \mathcal{H}_A\right ) \rightarrow \mathcal{L} \left ( \mathcal{H}_B\right )\) via
\[\mathfrak{E}_{B|A} \left ( \rho_A \right ) = \PTr{A}{\rho_{B|A} \rho_A},\]
where we have used the cyclic property of the trace to combine the \(\sqrt{\rho_A}\) terms, or
\[\mathfrak{E}_{B|A} \left ( \cdot \right ) = \PTr{A}{\rho_{B|A} (\cdot)}.\]
The map \(\mathfrak{E}_{B|A}\) so defined is just the Jamiolkowski isomorphic map to \(\rho_{B|A}\) and the above equation gives the reverse direction of the Jamiolkowski isomorphism that I was being secretive about earlier.
The punchline is that the Choi-Jamiolkowski isomorphism should not be thought of as a mapping between quantum states and quantum operations, but rather as a mapping between conditional quantum states and quantum operations. It is no more surprising than the fact that classical stochastic maps are determined by conditional probability distributions. If you think of it in this way, then your approach to quantum information will become conceptually simpler a lot of ways. These ways are discussed in detail in the paper.
Causal Conditional States
There is a subtlety that I have glossed over so far that I’d like to end with. The map \(\mathfrak{E}_{B|A}\) is not actually completely positive, which is why I did not denote it \(\mathcal{E}_{B|A}\), but when preceeded by a transpose on \(A\) it defines a completely positive map. This is because the Jamiolkowski isomorphism is defined in terms of the partial transpose of the maximally entangled state. Also, so far I have been talking about two distinct quantum systems that exist at the same time, whereas in the classical case, I talked about the input and output of a classical channel. A quantum channel is given by a CPT map \(\mathcal{E}_{B|A}\) and its Jamiolkowski representation would be
\[\mathcal{E}_{B|A} \left (\rho_A \right ) = \PTr{A}{\varrho_{B|A}\rho_A},\]
where \(\varrho_{B|A}\) is the partial transpose over \(A\) of a positive operator and it satisfies \(\PTr{B}{\varrho_{B|A}} = I_A\). This is the appropriate notion of a conditional state in the causal scenario, where you are talking about the input and output of a quantum channel rather than two systems at the same time. The two types of conditional state are related by a partial transpose.
Despite this difference, a good deal of unification is achieved between the way in which acausally related (two subsystems) and causally related (input and output of channels) degrees of freedom are described in this framework. For example, we can define a “causal joint state” as
\[\varrho_{AB} = \sqrt{\rho_A} \varrho_{B|A} \sqrt{\rho_A},\]
where \(\rho_A\) is the input state to the channel and \(\varrho_{B|A}\) is the Jamiolkowski isomorphic map to the CPT map. This unification is another main theme of the paper, and allows a quantum version of Bayes’ theorem to be defined that is independent of the causal scenario.
The Wonderful World of Conditional States
To end with, here is a list of some things that become conceptually simpler in the conditional states formalism developed in the paper:
• The Born rule, ensemble averaging, and quantum dynamics are all just instances of a quantum analog of the formula \(P(Y) = \sum_X P(Y|X)P(X)\).
• The Heisenberg picture is just a quantum analog of \(P(Z|X) = \sum_Y P(Z|Y)P(Y|X)\).
• The relationship between prediction and retrodiction (inferences about the past) in quantum theory is given by the quantum Bayes’ theorem.
• The formula for the set of states that a system can be ‘steered’ to by making measurements on a remote system, as in EPR-type experiments, is just an application of the quantum Bayes’ theorem.
If this has whet your appetite, then this and much more can be found in the paper.
Foundations Mailing Lists
Bob Coecke has recently set up an email mailing list for announcements in the foundations of quantum theory (conference announcements, job postings and the like). You can subscribe by sending a blank email to The mailing list is moderated so you will not get inundated by messages from cranks.
On a similar note, I thought I would mention the philosophy of physics mailing list, which has been going for about seven years and also often features announcements that are relevant to the foundations of quantum theory. Obviously, the focus is more on the philosophy side, but I have often heard about interesting conferences and workshops via this list. |
b3938990d09db8b2 | Tech skills: Is it getting harder to keep up?
Professional skills and experience are hard won from education, training and time in the industry. But it's amazing how many people get by despite a fundamental lack of knowledge.
Written in Philadelphia and despatched to TechRepublic at 30Mbps over an open wi-fi hub in my Pittsburgh hotel later the same day.
Almost everything I learned at college and university has been used at some time during my professional career - and I still lean on that seminal education when faced with new and challenging problems today. But by degree, the speed of change has seen many technologies and techniques sidelined by progress during my years in industry.
To probe the rate of change I recently asked an engineering class for a show of hands on a series of topics to get a feel for the knowledge evaporation rate. Out of a class of about 100 mature students the count went like this:
• Who has seen a thermionic tube? = 5
• Does anyone know how they work? =0
• Has anyone know how a cathode ray tube works? = 1
• Does anyone know how a transistor works? = 1
• Who knows how a laser works? = 3
• Who knows how an LED works? = 2
• Who knows how an LED display works? = 3
• Who understands Maxwell's equations? = 0
• Who knows how an antenna works? = 0
• Has anyone heard of the radar range equation? = 0
• Who has heard about the Schrödinger equation? = 11
• Who knows what a compiler is? = 15
I won't go on as I'm sure you get the idea. The big question is: does this lack of fundamental knowledge matter? Perhaps not. So long as someone somewhere does understand, the tech world will keep on spinning. But should the last one with knowledge die, we could quickly be in trouble.
For many of us, keeping abreast or ahead of the game is now an accelerating challenge driven by technologies that span every sector and aspect of companies and society.
We can no longer read all the R&D publications or attend all the conferences and courses to get a filtered and distilled view of progress.
Putting all these issues into some quantified context using the best practice I have come across involves the concept of knowledge half-life. The calculation methods are varied and hardly comprehensive, or indeed fully justified, but it is all we have as a guide to the challenge we now face.
The simplest technique is to reference the citation rates of scientific, technology and engineering publications. On this basis, I have to put together the following graphic for a broad selection of disciplines.
Image: Peter Cochrane/TechRepublic
The most interesting observation to make here is that the medical and marine biology students are out of date before they can even graduate, while the physicists have about 11 years of grace.
What does this tell us? The education system as it stands is no longer doing its job and can't possibly work as we move forward and the situation gets worse.
It is obvious that we have to move on, and it all has to be online and available anywhere, anytime, and in a form fit for purpose. But perhaps the biggest leap will be delegating the role of tutor to some disembodied entity - some machine - able to rapidly access, filter and format what is required well, or just, in time.
In addition, individuals will have to assume a greater responsibility for their own course of study. They will have to choose what they follow as wholly prescriptive education paths fall by the wayside. Moreover, education will be full time from cradle to grave for those in the fastest-moving sectors.
In many respects the world is demanding that students grow up and mature far faster than ever before. They will have to assume greater responsibility and achieve greater authority earlier than any previous generation, and they have to do it in concert with a world of machines and escalating complexity.
Will our young people be able to rise to this change and challenge? We'll soon see, but I certainly intend running with them and giving it go.
Editor's Picks |
96f60759e1cd102e | Symmetry Symmetry 2073-8994 Molecular Diversity Preservation International (MDPI) 10.3390/sym6020396 symmetry-06-00396 Article Invisibility and Symmetry: A Simple Geometrical Viewpoint Sánchez-SotoLuis L.* MonzónJuan J. Departamento de Optica, Facultad de Física, Universidad Complutense, 28040 Madrid, Spain; E-Mail:
Author Contributions: Both authors contributed equally to the theoretical analysis, numerical calculations, and writing of the paper.
Author to whom correspondence should be addressed; E-Mail:; Tel.: +34-91-3944-680; Fax: +34-91-3944-683.
06 2014 22 05 2014 6 2 396 408 24 02 2014 12 05 2014 14 05 2014 © 2014 by the authors; licensee MDPI, Basel, Switzerland. 2014
We give a simplified account of the properties of the transfer matrix for a complex one-dimensional potential, paying special attention to the particular instance of unidirectional invisibility. In appropriate variables, invisible potentials appear as performing null rotations, which lead to the helicity-gauge symmetry of massless particles. In hyperbolic geometry, this can be interpreted, via Möbius transformations, as parallel displacements, a geometric action that has no Euclidean analogy.
PT symmetry SL(2, ℂ) Lorentz group Hyperbolic geometry
The work of Bender and coworkers [16] has triggered considerable efforts to understand complex potentials that have neither parity ( ) nor time-reversal symmetry ( ), yet they retain combined invariance. These systems can exhibit real energy eigenvalues, thus suggesting a plausible generalization of quantum mechanics. This speculative concept has motivated an ongoing debate in several forefronts [7,8].
Quite recently, the prospect of realizing -symmetric potentials within the framework of optics has been put forward [9,10] and experimentally tested [11]. The complex refractive index takes on here the role of the potential, so they can be realized through a judicious inclusion of index guiding and gain/loss regions. These -synthetic materials can exhibit several intriguing features [1214], one of which will be the main interest of this paper, namely, unidirectional invisibility [1517].
In all these matters, the time-honored transfer-matrix method is particularly germane [18]. However, a quick look at the literature immediately reveals the different backgrounds and habits in which the transfer matrix is used and the very little cross talk between them.
To remedy this flaw, we have been capitalizing on a number of geometrical concepts to gain further insights into the behavior of one-dimensional scattering [1926]. Indeed, when one think in a unifying mathematical scenario, geometry immediately comes to mind. Here, we keep going this program and examine the action of the transfer matrices associated to invisible scatterers. Interestingly enough, when viewed in SO(1, 3), they turn to be nothing but parabolic Lorentz transformations, also called null rotations, which play a crucial role in the determination of the little group of massless particles. Furthermore, borrowing elementary techniques of hyperbolic geometry, we reinterpret these matrices as parallel displacements, which are motions without Euclidean counterpart.
We stress that our formulation does not offer any inherent advantage in terms of efficiency in solving practical problems; rather, it furnishes a general and unifying setting to analyze the transfer matrix for complex potentials, which, in our opinion, is more than a curiosity.
Basic Concepts on Transfer Matrix
To be as self-contained as possible, we first briefly review some basic facts on the quantum scattering of a particle of mass m by a local complex potential V(x) defined on the real line ℝ [2734]. Although much of the renewed interest in this topic has been fuelled by the remarkable case of symmetry, we do not use this extra assumption in this Section.
The problem at hand is governed by the time-independent Schrödinger equation H Ψ ( x ) = [ d 2 d x 2 + U ( x ) ] Ψ ( x ) = ɛ Ψ ( x )where ε = 2mE2 and U(x) = 2mV(x)/ћ2, E being the energy of the particle. We assume that U(x) → 0 fast enough as x → ±∞, although the treatment can be adapted, with minor modifications, to cope with potentials for which the limits U± = limx→±∞ U(x) are different.
Since U(x) decays rapidly as |x| ∞, solutions of (1) have the asymptotic behavior Ψ ( x ) = { A + e + ikx + A e ikx x B + e + ikx + B e ikx x
Here, k2 = ε, A± and B± are k-dependent complex coefficients (unspecified, at this stage), and the subscripts + and — distinguish right-moving modes exp(+ikx) from left-moving modes exp(−ikx), respectively.
The problem requires to work out the exact solution of (1) and invoke the appropriate boundary conditions, involving not only the continuity of Ψ(x) itself, but also of its derivative. In this way, one has two linear relations among the coefficients A± and B±, which can be solved for any amplitude pair in terms of the other two; the result can be expressed as a matrix equation that translates the linearity of the problem. Frequently, it is more advantageous to specify a linear relation between the wave amplitudes on both sides of the scatterer, namely, ( B + B ) = M ( A + A )
M is the transfer matrix, which depends in a complicated way on the potential U(x). Yet one can extract a good deal of information without explicitly calculating it: let us apply (3) successively to a right-moving [(A+ = 1, B = 0)] and to a left-moving wave [(A+ = 0, B = 1)], both of unit amplitude. The result can be displayed as ( T 0 ) = M ( 1 R ) , ( R r 1 ) = M ( 0 T r ) ,where Tℓ,r and Rℓ,r are the transmission and reflection coefficients for a wave incoming at the potential from the left and from the right, respectively, defined in the standard way as the quotients of the pertinent fluxes [35].
With this in mind, Equation (4) can be thought of as a linear superposition of the two independent solutions Ψ k ( x ) = { e + ikx + R ( x ) e ikx x , T ( k ) e + ikx x , Ψ k r ( x ) = { T r ( k ) e ikx x , e ikx + R r ( k ) e + ikx x ,which is consistent with the fact that, since ε > 0, the spectrum of the Hamiltonian (1) is continuous and there are two linearly independent solutions for a given value of ε. The wave function Ψ k ( x ) represents a wave incident from −∞ [exp(+ikx)] and the interaction with the potential produces a reflected wave [R(k) exp(−ikx)] that escapes to −∞ and a transmitted wave [T(k) exp(+ikx)] that moves off to −∞. The solution Ψ k ( x ) can be interpreted in a similar fashion.
Because of the Wronskian of the solutions (5) is independent of x, we can compute W ( Ψ k , Ψ k r ) = Ψ k Ψ k r Ψ k Ψ k r first for x → −∞ and then for x → ∞; this gives i 2 k W ( Ψ k , Ψ k r ) = T r ( k ) = T ( k ) T ( k )
We thus arrive at the important conclusion that, irrespective of the potential, the transmission coefficient is always independent of the input direction.
Taking this constraint into account, we go back to the system (4) and write the solution for M as M 11 ( k ) = T ( k ) R ( k ) R r ( k ) T ( k ) , M 12 ( k ) = R r ( k ) T ( k ) , M 21 ( k ) = R ( k ) T ( k ) , M 22 ( k ) = 1 T ( k )
A straightforward check shows that det M = +1, so M ∊ SL(2, ℂ); a result that can be drawn from a number of alternative and more elaborate arguments [36].
One could also relate outgoing amplitudes to the incoming ones (as they are often the magnitudes one can externally control): this is precisely the scattering matrix, which can be concisely formulated as ( B + A ) = S ( A + B )with matrix elements S 11 ( k ) = T ( k ) , S 12 ( k ) = R r ( k ) , S 21 ( k ) = R ( k ) , S 22 ( k ) = T ( k )
Finally, we stress that transfer matrices are very convenient mathematical objects. Suppose that V1 and V2 are potentials with finite support, vanishing outside a pair of adjacent intervals I1 and I2. If M1 and M2 are the corresponding transfer matrices, the total system (with support I1 U I2) is described by M = M 1 M 2
This property is rather helpful: we can connect simple scatterers to create an intricate potential landscape and determine its transfer matrix by simple multiplication. This is a common instance in optics, where one routinely has to treat multilayer stacks. However, this important property does not seem to carry over into the scattering matrix in any simple way [37,38], because the incoming amplitudes for the overall system cannot be obtained in terms of the incoming amplitudes for every subsystem.
Spectral Singularities
The scattering solutions (5) constitute quite an intuitive way to attack the problem and they are widely employed in physical applications. Nevertheless, it is sometimes advantageous to look at the fundamental solutions of (1) in terms of left- and right-moving modes, as we have already used in (2).
Indeed, the two independent solutions of (1) can be formally written down as [39] Ψ k ( + ) ( x ) = e + ikx + x K + ( x , x ) e + i k x d x Ψ k ( ) ( x ) = e ikx + x K ( x , x ) e i k x d x
The kernels K±(x,x ′) enjoy a number of interesting properties. What matters for our purposes is that the resulting Ψ k ( ± ) ( x ) are analytic with respect to k in ℂ+ = {z ∊ ℂ| Imz > 0} and continuous on the real axis. In addition, it is clear that Ψ k ( + ) ( x ) = e + ikx x , Ψ k ( ) ( x ) = e ikx x that is, they are the Jost functions for this problem [31].
Let us look at the Wronskian of the Jost functions W ( Ψ k ( ) , Ψ k ( + ) ), which, as a function of k, is analytical in ℂ+. A spectral singularity is a point k* ∊ ℝ+ of the continuous spectrum of the Hamiltonian (1) such that W ( Ψ k * ( ) , Ψ k * ( + ) ) = 0so Ψ k ( ± ) ( x ) become linearly dependent at k* and the Hamiltonian is not diagonalizable. In fact, the set of zeros of the Wronskian is bounded, has at most a countable number of elements and its limit points can lie in a bounded subinterval of the real axis [40]. There is an extensive theory of spectral singularities for (1) that was started by Naimark [41]; the interested reader is referred to, e.g., Refs. [4246] for further details.
The asymptotic behavior of Ψ k ± ( x ) at the opposite extremes of ℝ with respect to those in (12) can be easily worked out by a simple application of the transfer matrix (and its inverse); viz, Ψ k ( ) ( x ) = M 12 e + ikx + M 22 e ikx x Ψ k ( + ) ( x ) = M 22 e + ikx M 21 e ikx x
Using Ψ k ± ( x ) in (12) and (14), we can calculate i 2 k W ( Ψ k ( ) , Ψ k ( + ) ) = M 22 ( k )Upon comparing with the definition (13), we can reinterpret the spectral singularities as the real zeros of M22(k) and, as a result, the reflection and transmission coefficients diverge therein. The converse holds because M12(k) and M21(k) are entire functions, lacking singularities. This means that, in an optical scenario, spectral singularities correspond to lasing thresholds [4749].
One could also consider the more general case that the Hamiltonian (1) has, in addition to a continuous spectrum corresponding to k ∊ ℝ+, a possibly complex discrete spectrum. The latter corresponds to the square-integrable solutions of that represent bound states. They are also zeros of M22(k), but unlike the zeros associated with the spectral singularities these must have a positive imaginary part [36].
The eigenvalues of S are s ± = 1 M 22 ( k ) [ 1 ± 1 M 11 ( k ) M 22 ( k ) ]At a spectral singularity, s+ diverges, while s_ → M11(k)/2, which suggests identifying spectral singularities with resonances with a vanishing width.
Invisibility and <inline-graphic xlink:href="symmetry-06-00396i1.tif"/> Symmetry
As heralded in the Introduction, unidirectional invisibility has been lately predicted in materials. We shall elaborate on the ideas developed by Mostafazadeh [50] in order to shed light into this intriguing question.
The potential U(x) is called reflectionless from the left (right), if R(k) = 0 and Rr(k) ≠ 0 [Rr(k) = 0 and R(k) ≠ 0]. From the explicit matrix elements in (7) and (9), we see that unidirectional reflectionlessness implies the non-diagonalizability of both M and S. Therefore, the parameters of the potential for which it becomes reflectionless correspond to exceptional points of M and S [51,52].
The potential is called invisible from the left (right), if it is reflectionless from left (right) and in addition T(k) = 1. We can easily express the conditions for the unidirectional invisibility as M 12 ( k ) 0 , M 11 ( k ) = M 22 ( k ) = 1 ( left invisible ) M 21 ( k ) 0 , M 11 ( k ) = M 22 ( k ) = 1 ( right invisible )
Next, we scrutinize the role of -symmetry in the invisibility. For that purpose, we first briefly recall that the parity transformation “reflects” the system with respect to the coordinate origin, so that x−x and the momentum p ↦ − p. The action on the wave function is Ψ ( x ) ( P Ψ ) ( x ) = Ψ ( x )
On the other hand, the time reversal inverts the sense of time evolution, so that xx, p−p and i ↦ −i. This means that the operator implementing such a transformation is antiunitary and its action reads Ψ ( x ) ( T Ψ ) ( x ) = Ψ * ( x )
Consequently, under a combined transformation, we have Ψ ( x ) ( P T Ψ ) ( x ) = Ψ * ( x )
Let us apply this to a general complex scattering potential. The transfer matrix of the -transformed system, we denote by M( ), fulfils ( A + * A * ) = M ( P T ) ( B + * B * )
Comparing with (3), we come to the result M ( P T ) = ( M 1 ) *and, because det M = 1, this means M 11 P T M 22 * , M 12 P T M 12 * , M 21 P T M 21 * , M 22 P T M 11 * ,
When the system is invariant under this transformation [M( ) = M], it must hold M 1 = M *a fact already noticed by Longhi [48] and that can be also recast as [53] Re ( R T ) = Re ( R r T ) = 0
This can be equivalently restated in the form ρ τ = ± π / 2 , ρ r τ = ± π / 2with τ = arg(T) and ρℓ,r = arg(Rℓ,r). Hence, if we look at the complex numbers R, Rr, and T as phasors, Equation (26) tell us that R and Rr are always collinear, while T is simultaneously perpendicular to them. We draw the attention to the fact that the same expressions have been derived for lossless symmetric beam splitters [54]: we have shown that they hold true for any -symmetric structure.
A direct consequence of (23) is that there are particular instances of -invariant systems that are invisible, although not every invisible potential is invariant. In this respect, it is worth stressing, that even ( -symmetric) potentials do not support unidirectional invisibility and the same holds for real ( -symmetric) potentials.
In optics, beam propagation is governed by the paraxial wave equation, which is equivalent to a Schrödinger-like equation, with the role of the potential played here by the refractive index. Therefore, a necessary condition for a complex refractive index to be invariant is that its real part is an even function of x, while the imaginary component (loss and gain profile) is odd.
Relativistic Variables
To move ahead, let us construct the Hermitian matrices X = ( X + X ) ( X + * X * ) = ( | X + | 2 X + X * X + * X | X | 2 )where X± refers to either A± or B±; i.e., the amplitudes that determine the behavior at each side of the potential. The matrices X are quite reminiscent of the coherence matrix in optics or the density matrix in quantum mechanics.
One can verify that M acts on X by conjugation X = M X M
The matrix X′ is associated with the amplitudes B± and X with A±.
Let us consider the set σμ = ( ,σ), with Greek indices running from 0 to 3. The σμ are the identity and the standard Pauli matrices, which constitute a basis of the linear space of 2 × 2 complex matrices. For the sake of covariance, it is convenient to define σ̃μ = σμ = ( ,σ), so that [55] Tr ( σ ˜ μ σ ν ) = 2 δ ν μand δ ν μ ( M ) is the Kronecker delta. To any Hermitian matrix X we can associate the coordinates x μ = 1 2 Tr ( X σ ˜ μ )
The congruence (28) induces in this way a transformation x μ = Λ ν μ ( M ) x νwhere Λ ν μ ( M ) can be found to be Λ ν μ ( M ) = 1 2 Tr ( σ ˜ μ M σ ν M )This equation can be solved to obtain M from Λ. The matrices M and −M generate the same Λ, so this homomorphism is two-to-one. The variables xμ are coordinates in a Minkovskian (1+3)-dimensional space and the action of the system can be seen as a Lorentz transformation in SO(1, 3).
Having set the general scenario, let us have a closer look at the transfer matrix corresponding to right invisibility (the left invisibility can be dealt with in an analogous way); namely, M = ( 1 R 0 1 )where, for simplicity, we have dropped the superscript from Rr, as there is no risk of confusion. Under the homomorphism (32) this matrix generates the Lorentz transformation Λ ( M ) = ( 1 + | R | 2 / 2 Re R Im R | R | 2 / 2 Re R 1 0 Re R Im R 0 1 Im R | R | 2 / 2 Re R Im R 1 | R | 2 / 2 )According to Wigner [56], the little group is a subgroup of the Lorentz transformations under which a standard vector sμ remains invariant. When sμ is timelike, the little group is the rotation group SO(3). If sμ is spacelike, the little group are the boosts SO(1, 2). In this context, the matrix (34) is an instance of a null rotation; the little group when sμ is a lightlike or null vector, which is related to E(2), the symmetry group of the two-dimensional Euclidean space [57].
If we write (34) in the form Λ(M) = exp(iN), we can easily work out that N = ( 0 Re R Im R 0 Re R 0 0 Re R Im R 0 0 Im R 0 Re R Im R 0 )This is a nilpotent matrix and the vectors annihilated by N are invariant by Λ(M). In terms of the Lie algebra so(1, 3), N can be expressed as N = Re R ( K 1 + J 2 ) Im R ( K 2 + J 1 )where Ki generate boosts and Ji rotations (i = 1, 2, 3) [58]. Observe that the rapidity of the boost and the angle of the rotation have the same norm. The matrix N define a two-parameter Abelian subgroup.
Let us take, for the time being, Re R = 0, as it happens for -invariant invisibility. We can express K2 + J1 as the differential operator K 2 + J 1 ( x 2 0 + x 0 2 ) + ( x 2 3 x 3 2 ) = x 2 ( 0 + 3 ) + ( x 0 x 3 ) 2
As we can appreciate, the combinations x 2 , x 0 x 3 , ( x 0 ) 2 ( x 1 ) 2 ( x 3 ) 2remain invariant. Suppressing the inessential coordinate x2, the flow lines of the Killing vector (37) is the intersection of a null plane, x0x3 = c2 with a hyperboloid (x0)2 − (x1)2 − (x3)2 = c3. The case c3 = 0 has the hyperboloid degenerate to a light cone with the orbits becoming parabolas lying in corresponding null planes.
Hyperbolic Geometry and Invisibility
Although the relativistic hyperboloid in Minkowski space constitute by itself a model of hyperbolic geometry (understood in a broad sense, as the study of spaces with constant negative curvature), it is not the best suited to display some features.
Let us consider the customary tridimensional hyperbolic space ℍ3, defined in terms of the upper half-space {(x, y, z) ∊ ℝ3|z > 0}, equipped with the metric [59] d s 2 = d x 2 + d y 2 + d z 2 z
The geodesics are the semicircles in ℍ3 orthogonal to the plane z = 0.
We can think of the plane z = 0 in R3 as the complex plane ℂ with the natural identification (x, y, z) ↦ w = x + iy. We need to add the point at infinity, so that ℂ̂ = ℂU ∞, which is usually referred to as the Riemann sphere and identify ℂ̂ as the boundary of ℍ3.
Every matrix M in SL(2, ℂ) induces a natural mapping in ℂ via Möbius (or bilinear) transformations [60] w = M 11 w + M 12 M 21 w + M 22Note that any matrix obtained by multiplying M by a complex scalar λ gives the same transformation, so a Möbius transformation determines its matrix only up to scalar multiples. In other words, we need to quotient out SL(2, ℂ) by its center { , − }: the resulting quotient group is known as the projective linear group and is usually denoted PSL(2, ℂ).
Observe that we can break down the action (40) into a composition of maps of the form w w + λ , w λ w , w 1 / wwith λ ∊ ℂ. Then we can extend the Mobius transformations to all ℍ3 as follows: ( w , z ) ( w + λ , z ) , ( w , z ) ( λ w , | λ | z ) , ( w , z ) ( w * | w 2 | + z 2 , z | w 2 | + z 2 )The expressions above come from decomposing the action on ℂ̂ of each of the elements of PSL(2, ℂ) in question into two inversions (reflections) in circles in ℂ̂. Each such inversion has a unique extension to ℍ3 as an inversion in the hemisphere spanned by the circle and composing appropriate pairs of inversions gives us these formulas.
In fact, one can show that PSL(2, ℂ) preserves the metric on ℍ3. Moreover every isometry of ℍ3 can be seen to be the extension of a conformal map of ℂ̂ to itself, since it must send hemispheres orthogonal to ℂ̂ to hemispheres orthogonal to ℂ̂, hence circles in ℂ̂ to circles in ℂ̂. Thus all orientation-preserving isometries of ℍ3 are given by elements of PSL(2, ℂ) acting as above.
In the classification of these isometries the notion of fixed points is of utmost importance. These points are defined by the condition w′ = w in (40), whose solutions are w f = ( M 11 M 22 ) ± [ Tr ( M ) ] 2 4 2 M 21So, they are determined by the trace of M. When the trace is a real number, the induced Mobius transformations are called elliptic, hyperbolic, or parabolic, according [Tr(M)]2 is lesser than, greater than, or equal to 4, respectively. The canonical representatives of those matrices are [61] ( e i θ / 2 0 0 e i θ / 2 ) , elliptic ( e ξ / 2 0 0 e ξ / 2 ) , hyperbolic ( 1 λ 0 1 ) , parabolicwhile the induced geometrical actions are w = w e i θ , w = w e ξ , w = w + λthat is, a rotation of angle θ (so fixes the axis z), a squeezing of parameter ξ (it has two fixed points in ℂ̂, no fixed points in ℍ3, and every hyperplane in ℍ3 that contains the geodesic joining the two fixed points in ℂ̂ is invariant); and a parallel displacement of magnitude λ, respectively. We emphasize that this later action is the only one without Euclidean analogy Indeed, in view of (33), this is precisely the action associated to an invisible scatterer. The far-reaching consequences of this geometrical interpretation will be developed elsewhere.
Concluding Remarks
We have studied unidirectional invisibility by a complex scattering potential, which is characterized by a set of invariant equations. Consequently, the -symmetric invisible configurations are quite special, for they possess the same symmetry as the equations.
We have shown how to cast this phenomenon in term of space-time variables, having in this way a relativistic presentation of invisibility as the set of null rotations. By resorting to elementary notions of hyperbolic geometry, we have interpreted in a natural way the action of the transfer matrix in this case as a parallel displacement.
We think that our results are yet another example of the advantages of these geometrical methods: we have devised a geometrical tool to analyze invisibility in quite a concise way that, in addition, can be closely related to other fields of physics.
We acknowledge illuminating discussions with Antonio F. Costa, José F. Carineña and José María Montesinos. Financial support from the Spanish Research Agency (Grant FIS2011-26786) is gratefully acknowledged.
Conflicts of Interest
The authors declare no conflict of interest.
References BenderC.M.BoettcherS.Real spectra in non-Hermitian Hamiltonians having symmetryPhys. Rev. Lett.1998805243524610.1103/PhysRevLett.80.5243 BenderC.M.BoettcherS.MeisingerP.N.-symmetric quantum mechanicsJ. Math. Phys.1999402201222910.1063/1.532860 BenderC.M.BrodyD.C.JonesH.F.Complex extension of quantum mechanicsPhys. Rev. Lett.20028910.1103/PhysRevLett.89.270401 BenderC.M.BrodyD.C.JonesH.F.Must a Hamiltonian be Hermitian?Am. J. Phys.2003711095110210.1119/1.1574043 BenderC.M.Making sense of non-Hermitian HamiltoniansRep. Prog. Phys.200770947101810.1088/0034-4885/70/6/R03 BenderC.M.MannheimP.D. symmetry and necessary and sufficient conditions for the reality of energy eigenvaluesPhys. Lett. A20103741616162010.1016/j.physleta.2010.02.032 AssisP.Non-Hermitian Hamiltonians in Field Theory PT-symmetry and ApplicationsVDMSaarbrücken, Germany2010 MoiseyevN.Non-Hermitian Quantum MechanicsCambridge University PressCambridge, UK2011 El-GanainyR.MakrisK.G.ChristodoulidesD.N.MusslimaniZ.H.Theory of coupled optical -symmetric structuresOpt. Lett.2007322632263410.1364/OL.32.00263217767329 BendixO.FleischmannR.KottosT.ShapiroB.Exponentially fragile symmetry in lattices with localized eigenmodesPhys. Rev. Lett.200910310.1103/PhysRevLett.103.030402 RuterC.E.MakrisK.G.El-GanainyR.ChristodoulidesD.N.SegevM.KipD.Observation of parity-time symmetry in opticsNat. Phys.2010619219510.1038/nphys1515 MakrisK.G.El-GanainyR.ChristodoulidesD.N.MusslimaniZ.H.Beam dynamics in symmetric optical latticesPhys. Rev. Lett.2008100103904:1103904:4 LonghiS.Bloch oscillations in complex crystals with symmetryPhys. Rev. Lett.2009103123601:1123601:4 SukhorukovA.A.XuZ.KivsharY.S.Nonlinear suppression of time reversals in -symmetric optical couplersPhys. Rev. A20108210.1103/PhysRevA.82.043818 AhmedZ.BenderC.M.BerryM.V.Reflectionless potentials and symmetryJ. Phys. A200538L627L63010.1088/0305-4470/38/39/L01 LinZ.RamezaniH.EichelkrautT.KottosT.CaoH.ChristodoulidesD.N.Unidirectional invisibility dnduced by -symmetric periodic structuresPhys. Rev. Lett.201110610.1103/PhysRevLett.106.213901 LonghiS.Invisibility in -symmetric complex crystalsJ. Phys. A20114410.1088/1751-8113/44/48/485302 Sánchez-SotoL.L.MonzónJ.J.BarriusoA.G.CariñenaJ.The transfer matrix: A geometrical perspectivePhys. Rep.201251319122710.1016/j.physrep.2011.10.002 MonzónJ.J.Sánchez-SotoL.L.Lossles multilayers and Lorentz transformations: More than an analogyOpt. Commun.19991621610.1016/S0030-4018(99)00065-6 MonzónJ.J.Sánchez-SotoL.L.Fullly relativisticlike formulation of multilayer opticsJ. Opt. Soc. Am. A1999162013201810.1364/JOSAA.16.002013 MonzónJ.J.YonteT.Sánchez-SotoL.L.Basic factorization for multilayersOpt. Lett.20012637037210.1364/OL.26.00037018040327 YonteT.MonzónJ.J.Sánchez-SotoL.L.CariñenaJ.F.López-LacastaC.Understanding multilayers from a geometrical viewpointJ. Opt. Soc. Am. A20021960360910.1364/JOSAA.19.000603 MonzónJ.J.YonteT.Sánchez-SotoL.L.CariñenaJ.F.Geometrical setting for the classification of multilayersJ. Opt. Soc. Am. A20021998599110.1364/JOSAA.19.000985 BarriusoA.G.MonzónJ.J.Sánchez-SotoL.L.General unit-disk representation for periodic multilayersOpt. Lett.2003281501150310.1364/OL.28.00150112956359 BarriusoA.G.MonzónJ.J.Sánchez-SotoL.L.CariñenaJ.F.Vectorlike representation of multilayersJ. Opt. Soc. Am. A2004212386239110.1364/JOSAA.21.002386 BarriusoA.G.MonzónJ.J.Sánchez-SotoL.L.CostaA.F.Escher-like quasiperiodic heterostructuresJ. Phys. A200942192002:1192002:9 MugaJ.G.PalaoJ.P.NavarroB.EgusquizaI.L.Complex absorbing potentialsPhys. Rep.200439535742610.1016/j.physrep.2004.03.002 LevaiG.ZnojilM.Systematic search for -symmetric potentials with real spectraJ. Phys. A2000337165718010.1088/0305-4470/33/40/313 AhmedZ.Schrödinger transmission through one-dimensional complex potentialsPhys. Rev. A200164042716:1042716:4 AhmedZ.Energy band structure due to a complex, periodic, -invariant potentialPhys. Lett. A200128623123510.1016/S0375-9601(01)00426-1 MostafazadehA.Spectral singularities of complex scattering potentials and infinite reflection and transmission coefficients at real energiesPhys. Rev. Lett.2009102220402:1220402:4 CannataF.DedonderJ.P.VenturaA.Scattering in -symmetric quantum mechanicsAnn. Phys.200732239743310.1016/j.aop.2006.05.011 ChongY.D.GeL.StoneA.D.-symmetry breaking and laser-absorber modes in optical scattering systemsPhys. Rev. Lett.201110610.1103/PhysRevLett.106.093902 AhmedZ.New features of scattering from a one-dimensional non-Hermitian (complex) potentialJ. Phys. A20124510.1088/1751-8113/45/3/032004 BoonsermP.VisserM.One dimensional scattering problems: A pedagogical presentation of the relationship between reflection and transmission amplitudesThai J. Math.201088397 MostafazadehA.Mehri-DehnaviH.Spectral singularities, biorthonormal systems and a two-parameter family of complex point interactionsJ. Phys. A20094210.1088/1751-8113/42/12/125303 AktosunT.A factorization of the scattering matrix for the Schrödinger equation and for the wave equation in one dimensionJ. Math. Phys.1992333865386910.1063/1.529883 AktosunT.KlausM.van der MeeC.Factorization of scattering matrices due to partitioning of potentials in one-dimensional Schrödinger-type equationsJ. Math. Phys.1996375897591510.1063/1.531754 MarchenkoV.A.Sturm-Liouville Operators and Their ApplicationsAMS ChelseaProvidence, RI, USA1986 TuncaG.BairamovE.Discrete spectrum and principal functions of non-selfadjoint differential operatorCzech J. Math.19994968970010.1023/A:1022488631049 NaimarkM.A.Investigation of the spectrum and the expansion in eigenfunctions of a non-selfadjoint operator of the second order on a semi-axisAMS Transl.196016103193 PavlovB.S.The nonself-adjoint Schrödinger operatorsTopics Math. Phys.1967187114 NaimarkM.A.Linear Differential Operators: Part IIUngarNew York, NY, USA1968 SamsonovB.F.SUSY transformations between diagonalizable and non-diagonalizable HamiltoniansJ. Phys. A200538L397L40310.1088/0305-4470/38/21/L04 AndrianovA.A.CannataF.SokolovA.V.Spectral singularities for non-Hermitian one-dimensional Hamiltonians: Puzzles with resolution of identityJ. Math. Phys.201051052104:1052104:22 Chaos-CadorL.García-CalderónG.Resonant states for complex potentials and spectral singularitiesPhys. Rev. A20138710.1103/PhysRevA.87.042114 SchomerusH.Quantum noise and self-sustained radiation of - symmetric systemsPhys. Rev. Lett.201010410.1103/PhysRevLett.104.233601 LonghiS.-symmetric laser absorberPhys. Rev. A20108210.1103/PhysRevA.82.031801 MostafazadehA.Nonlinear spectral singularities of a complex barrier potential and the lasing threshold conditionPhys. Rev. A20138710.1103/PhysRevA.87.063838 MostafazadehA.Invisibility and symmetryPhys. Rev. A20138710.1103/PhysRevA.87.012103 MüllerM.RotterI.Exceptional points in open quantum systemsJ. Phys. A200841244018:1244018:15 Mehri-DehnaviH.MostafazadehA.Geometric phase for non-Hermitian Hamiltonians and its holonomy interpretationJ. Math. Phys.200849082105:1082105:17 MonzónJ.J.BarriusoA.G.Montesinos-AmilibiaJ.M.Sánchez-SotoL.L.Geometrical aspects of PT-invariant transfer matricesPhys. Rev. A20138710.1103/PhysRevA.87.012111 MandelL.WolfE.Optical Coherence and Quantum Optics.Cambridge University PressCambridge, UK1995 BarutA.O.Ra̧czkaR.Theory of Group Representations and ApplicationsPWNWarszaw, Poland1977Section 17.2 WignerE.On unitary representations of the inhomogeneous Lorentz groupAnn. Math.19394014920410.2307/1968551 KimY.S.NozM.E.Theory and Applications of the Poincare GroupReidelDordrecht, The Netherlands1986 WeinbergS.The Quantum Theory of FieldsCambridge University PressCambridge, UK2005Volume 1 IversenB.Hyperbolic GeometryCambridge University PressCambridge, UK1992Chapter VIII RatcliffeJ.G.Foundations of Hyperbolic ManifoldsSpringerBerlin, Germany2006Section 4.3 AndersonJ.W.Hyperbolic GeometrySpringerNew York, NY, USA1999Chapter 3 |
2dc80347c96f7714 | Novel Solution of Wheeler-DeWitt Theory
Novel Solution of Wheeler-DeWitt Theory
Lukasz Andrzej Glinka
Open Access OPEN ACCESS Peer Reviewed PEER-REVIEWED
Novel Solution of Wheeler-DeWitt Theory
Lukasz Andrzej Glinka
B.M. Birla Science Centre, Hyderabad, India
Taking into account the global one-dimensionality conjecture recently proposed by the author, the Cauchy-like analytical wave functional of the Wheeler-DeWitt theory is derived. The crucial point of the integration strategy is canceling of the singular behavior of the effective potential, which is performed through the suitable change of variables introducing the invariant global dimension. In addition, the conjecture is extended onto the wave functionals dependent on both Matter felids as well as the invariant global dimension. Through application of the reduction within the quantum gravity model, the appropriate Dirac equation is obtained and than solved. The case of superposition is also briey discussed.
Cite this article:
• Glinka, Lukasz Andrzej. "Novel Solution of Wheeler-DeWitt Theory." Applied Mathematics and Physics 2.3 (2014): 73-81.
• Glinka, L. A. (2014). Novel Solution of Wheeler-DeWitt Theory. Applied Mathematics and Physics, 2(3), 73-81.
• Glinka, Lukasz Andrzej. "Novel Solution of Wheeler-DeWitt Theory." Applied Mathematics and Physics 2, no. 3 (2014): 73-81.
Import into BibTeX Import into EndNote Import into RefMan Import into RefWorks
1. Introduction
The Wheeler-DeWitt theory, also known as quantum geometrodynamics or quantum General Relativity, is the foundational model of quantum gravity considered in modern theoretical physics, Cf. Ref. [1-38][1]. This model straightforwardly arises from the canonical General Relativity, formulated on the basis of the Arnowitt-Deser-Misner decomposition, well known as the 3 + 1 splitting, of a four-dimensional space-time metric, applied to the Einstein-Hilbert action supplemented by the York-Gibbons-Hawking boundary term. This procedure produces the Hamiltonian action, as well as the primary and secondary constraints satisfying the first-class algebra whose are canonically quantized according to the Dirac method. The quantized Hamiltonian constraint is the quantum evolution known as the Wheeler-DeWitt equation, which is the second order functional differential equation on the abstract configuration space known as the Wheeler superspace and containing all three-dimensional embedded geometries, whose solutions known as wave functionals in general depend on an induced three-metric and Matter fields.
The heart of the matter, however, is the question of integrability of the Wheeler-DeWitt equation, and, for this reason, the possible new physical meaning of the quantum geometrodynamics could arise along with the integration strategy. Since the 1970s, S. W. Hawking and his coauthors [39-63][39] have proposed to solve the Wheeler-DeWitt equation through making use of the formal analogy with the Schrödinger equation of usual quantum mechanics, and applied the Feynman path integral method which, however, generates manifestly non-analytical wave functionals, that is the solutions which do not form the Cauchy surface necessary to the rational analysis of any differential equation. The approach, sometimes called the Hartle-Hawking wave function, is correct from the point of view of quantum field theory, but actually instead of concrete calculations of the path integrals and development of the method beyond the simplest cosmological solutions of the Einstein field equations, the only qualitative ideas which link the Feynman integration with quantum cosmology have been proposed. Since this approach is far from a mathematical consistency, still the question which is neglected in the literature are other possible solutions to the Wheeler-DeWitt equation, including both non-analytical and analytical ones, which could be beyond the solutions defined through the method of path integration. It is worth stressing that, in fact, both any one analytical solution, that is the Cauchy-like wave functional, to quantum geometrodynamics and its possibly interesting physical meaning are still unknown. This point is very unsatisfactory, and, consequently, makes the quantum geometrodynamics a theory produced in the way of a false analogy with the formalism of quantum mechanics.
Nevertheless, the discussion of the qualitative matter of quantum geometrodynamics is not the subject of this paper, whereas we will present here the way to receive the analytical solutions to the Wheeler-DeWitt equation throughout a systematic construction. For this reason, we apply the global one-dimensional conjecture, recently discussed by the author's writings [64, 65, 66], which is immediately rooted in the generic quantum cosmology [67, 68, 69, 70] dedicated to the Einstein-Friedmann Universe. The main result of this conjecture are the wave functional which are dependent on the determinant of the threedimensional metric, which is named the global dimension, and the resulting theory is the Schrödinger quantum mechanics in the one dimension. The crucial point of the integration strategy is application of the suitable change of variables which removes the singular behavior of the effective potential. In result, we receive the concept of invariant global dimension, where the word invariance is related to the invariant integral measure on a spacelike hypersurface, and the presence of Matter fields is included. Finally, we show the way to transform the theory into the suitable Dirac equation, which defined the new strategy for quantum geometrodynamics and is solved to receive the analytical wave functionals.
The paper is organized in the following way. In the Section 2, the basic facts about quantum geometrodynamics are collected. The Section 3 briey discusses the global one-dimensional conjecture, including the concept of the invariant global dimension. The suitable Dirac equation is obtained in the Section 4, and the new type of analytical wave functionals is constructed in the Section 5. In Section 6, certain consequences are presented, whereas in the Section 7 all results are summarized.
2. Canonical Quantum Gravity
Let us recall the basic facts, for details Cf. Ref. [71, 72, 73, 74]. General Relativity, governed by the Einstein field equations{1}
where is a cosmological constant and is a stress-energy tensor of Matter fields, models space-time as a four-dimensional pseudo-Riemannian manifold equipped with a metric the Riemann-Christoffel curvature tensor the Ricci second fundamental form and the Ricci scalar curvature If is closed and has an induced spacelike boundary with an induced metric the second fundamental form and an extrinsic curvature then the field equations (1) are the equations of motion following from the variational principle applied to the Einstein-Hilbert action with the York-Gibbons-Hawking term [75, 76]
and the stress-energy tensor generated by the variational principle is
where is Matter fields Lagrangian. The appropriate embedding theorems allow to make use of the 3 + 1 splitting [77, 78].
for which the action (2) takes the Hamiltonian form with
where are canonical conjugate momenta, and are [79].
with and holds
where is an intrinsic covariant derivative of are generators of the spatial diffeomorphisms [80].
where and the first-class algebra holds
where are the structure constants of the diffeomorphism group, and all other Lie's brackets vanish. Timepreservation [81, 82, 83, 84] of the primary constraints, that is and leads to the secondary constraints - scalar (Hamiltonian) and vector respectively
where the scalar constraint yields dynamics, while the vector one merely reects diffeoinvariance. Making use of the canonical momentum one obtains the Einstein-Hamilton-Jacobi equation
where is the DeWitt metric on the Wheeler superspace [85, 86, 87, 88]. The Dirac quantization method [81, 82, 83, 84]
applied to the constraint (14), yields the Wheeler-DeWitt equation [80, 89, 90].
whereas other first class constraints merely reect diffeoinvariance
and are not important in this model, called quantum geometrodynamics.
3. Global One-dimensional Conjecture
The global one-dimensionality conjecture [64, 65, 66], establishes the strategy within quantum geometrodynamics which allows to receive analytical solutions. Making use of the Jacobi rule for differentiation of a determinant of a metric one obtains
where is the diffeoinvariant variable which is third order in and is the Levi-Civita density. Consequently, one has the differentiation rule
which applied to the quantum geometrodynamics (17), with making the double contraction of the supermetric with an embedding metric, leads to
and finally the Wheeler-DeWitt equation (17) becomes the usual differential equation
where is the effective potential
The first term in (23) describes contribution due to an embedding geometry only, the second one is mix of the cosmological constant and an embedding geometry, and the third component is due to Matter fields and an embedding geometry. In result, one has to deal with wave functional what agrees with the basic diffeoinvariace (18).
The potential (23) has a manifestly singular behavior which however can be canceled through the appropriate change of variables
where we have introduced the new global dimension called here the invariant dimension, which is a functional of the global dimension and, therefore, also diffeoinvariant. With (24) the equation (22) becomes
In fact, is a kind of the gauge, wherein is generic. Note that the following choice
cancels the singularity in (23), and the equation (27) becomes
with the appropriate normalization condition
where is the invariant product functional measure. Note that both and are the Lebesgue-Stieltjes (Radon) integral measures which can be rewritten as the Riemann measures
what relates the superspace to the space-time.
4. The Dirac equation
Eq. (27) can be derived as the Euler-Lagrange equation of motion by variational principle applied to the action
where partial differentiation was used. Choosing the coordinate system so that the boundary term vanishes
and using the standard definition
one obtains the Lagrangian of the Euclidean field theory
for which the corresponding canonical conjugate momentum is
and, therefore, the choice (36) actually means orthogonal coordinates
for any values of and . Applying (39) in (27), one receives
and combining with (39), the appropriate Dirac equation is obtained
where we have employed the notation
and the -matrices algebra consists only one element - the Pauli matrix
where I is the identity matrix, that in itself obey the algebra
Dimensional reduction of the one component second order theory (27) yields the two component first order one (42) determined by the Eucludean Clifford algebra , Cf. Ref. [91], that is the matrix algebra having a complex two-dimensional representation, which decomposes into a direct sum of two isomorphic central simple algebras or a tensor product
Restricting to yield a two-dimensional spin representations; splits it onto a sum of two one-dimensional Weyl representations.
5. Analytic Wave Functional
The Dirac equation (42) can be rewritten in the Schrödinger form
whose most general solution can be written as
where is an initial data vector with respect to only, is a unitary evolution operator
and is a finite integration area in -space, whereas the volume of full configuration space and the averaged energy are
where is a finite integration region of full configuration space. Explicitly
and, consequently, the received wave functional are
whereas the canonical conjugate momentum is
where and are initial data with respect to . Applying (39) in (56), one obtains
where and and calculating
with using (57), one receives the formula
which compared with (56) leads to the system of equations
The first equation of the system (61) yields the relation
where the last integral arises by the first formula in (51), which after application to the second equation gives simply and, moreover, the volume is -invariant
The probability density can be deduced easily by (55)
and, in the light of the relation (40), one has
Assuming the following separation conditions
where and are functionals of only, while and are constant functionals, and applying the usual normalization, one obtains
where the constants A and B are given by the integrals
assumed to be convergent and _nite. The solution to the equation (67)
joined with (39) and (66) gives the di_erential equation for the initial data
which can be integrated
where C is a constant of integration, and gives the formula
which is equivalent to
Because must be a functional of , one has with a constant , and, moreover, Taking into account (70), one obtains
In the light of the equation (40), however, one of the relations is always true
One sees that in any case has discrete values. By the first relation in (76)
where is an integer, while the second relation in (76) gives
For the first case one has
whereas and for the second one
Finally, the invariant one-dimensional wave functional (55) becomes
in the first case of (76), while for the second one
6. Developments
6.1. General Solutions
The general analytic solutions of the reduced quantum geometrodynamics can be now constructed for any induced metric from the solutions (81) and (82). It can be easily seen that
Making use of (83) in the solutions (81) and (82), one obtains the general solutions according to the global one-dimensionality conjecture
are assumed to be convergent and finite constants. The normalization condition
applied in the solutions (87) and (88), leads to
which yield and, therefore,
The solutions (93) and (94) describe two independent quantum gravity states.
6.2. Superposition
Because, the equations (17) and (31) are linear, the superposition
where are arbitrary constants, could be considered as the most general solution, for which the normalization condition (91) is the constraint
For (98) gives simply
The case is more complicated. Note that the constraint (98) gives
or, equivalently, for
which after mutual adding and making use of (98) gives
and, consequently,
The complex decomposition for and applied in (105) leads to
or, equivalently,
Employing (107) within the constraint (98), one obtains
Because both as squares of absolute values, one obtains the values of the constant in dependence on the integral
where for the condition holds.
7. Summary
We have discussed few consequences of quantum geometrodynamics according to the global one-dimensional conjecture. Employment of the conjecture immediately led us to construction of the analytic solutions, wherein the strategy of integration used the concept of invariant dimension instead of the global dimension introduced to remove the singular behavior of the effective potential. In general, the procedure has used for computations the Lebesgue-Stieltjes, or Radon, one-dimensional integrals, and, therefore, meaningfully simplified considerations of quantum gravity and led to analytical wave functionals. Finally, we have discussed developments of the strategy. The first one was construction of the solutions for any induced metric, which differ from the Feynman path integral solutions, whereas the second one was the question of superposition. Certainly, there are open problems related to the novel wave functionals. The reader interested in advancements is advised to take into account the author's monograph [92].
[1] J. R. Klauder (ed.), Magic without magic: John Archibald Wheeler (Freeman, 1972).
In article
[2] C. J. Isham, R. Penrose, and D. W. Sciama (eds.), Quantum Gravity. An Oxford symposium (Oxford University Press, 1975).
In article
[3] R. Balian and J. Zinn-Justin (eds.), Methods in Field Theory. Les Houches, École D' Été De Physique Théorique. Session XXVIII (North-Holland, 1976).
In article
[4] C. J. Isham, R. Penrose, and D. W. Sciama (eds.), Quantum Gravity 2. A second Oxford symposium (Oxford University Press, 1981).
In article
[5] S. M. Christensen (ed.), Quantum Theory of Gravity. Essays in honor of the 60th birthday of Bryce S. DeWitt. (Adam Hilger, 1984).
In article
[6] R. Penrose and C. J. Isham (eds.), Quantum concepts in space and time (Oxford University Press, 1986).
In article
[7] M. A. Markov, V. A. Berezin, and V. P. Frolov (eds.), Quantum Gravity. Proceedings of the Fourth Seminar, May 25-29, 1987, Moscow, USSR (World Scientific, 1988).
In article
[8] J. Audretsch and V. de Sabbata (eds.), Quantum mechanics in curved space-time (Plenum Press, 1990).
In article CrossRef
[9] A. Ashtekar and J. Stachel (eds.), Conceptual problems of quantum gravity (Birkhauser, 1991).
In article
[10] S. Coleman, J. B. Hartle, T. Piran, and S. Weinberg (eds.), Quantum Cosmology and baby Universes (World Scientific, 1991).
In article CrossRef
[11] I. L. Buchbinder, S. D. Odintsov, and I. L. Shapiro, Effective Action in Quantum Gravity (Institute of Physics Publishing, 1992).
In article
[12] D. J. Gross, T. Piran, and S. Weinberg (eds.), Two Dimensional Quantum Gravity and Random Surfaces (World Scientific, 1992).
In article
[13] M. C. Bento, O. Bertolami, J. M. Mourão, and R. F. Picken (eds.), Classical and quantum gravity (World Scientific, 1993).
In article
[14] G. W. Gibbons and S. W. Hawking (eds.), Euclidean Quantum Gravity (World Scientific, 1993).
In article
[15] J. C. Baez (ed.), Knots and Quantum Gravity (Clarendon Press, 1994).
In article
[16] J. Ehlers and H. Friedrich (eds.), Canonical Gravity: From Classical to Quantum (Springer, 1994).
In article CrossRef
[17] G. Esposito, Quantum Gravity, Quantum Cosmology and Lorentzian Geometries (Springer, 1994).
In article CrossRef
[18] E. Prugovečki, Principles of Quantum General Relativity (World Scien- tific, 1995).
In article
[19] R. Gambini and J. Pullin, Loops, Knots, Gauge Theories and Quantum Gravity (Cambridge University Press, 1996).
In article CrossRef
[20] G. Esposito, A. Yu. Kamenshchik, and G. Pollifrone, Euclidean Quan- tum Gravity on Manifolds with Boundary (Springer, 1997).
In article CrossRef
[21] P. Fré, V. Gorini, G. Magli, and U. Moschella, Classical and Quantum Black Holes (Institute of Physics Publishing, 1999).
In article CrossRef
[22] I. G. Avramidi, Heat Kernel and Quantum Gravity (Springer, 2000).
In article
[23] J. Kowalski-Glikman (ed.), Towards Quantum Gravity (Springer, 2000).
In article CrossRef
[24] C. Callender and N. Huggett (eds.), Physics meets philosophy at the Planck scale. Contemporary theories in quantum gravity. (Cambridge University Press, 2001).
In article CrossRef
[25] B. N. Kursunoglu, S. L. Mintz, and A. Perlmutter (eds.), Quantum Grav- ity, Generalized Theory of Gravitation and Superstring Theory-Based Unif ication (Kluwer Academic Press, 2002).
In article CrossRef
[26] S. Carlip, Quantum Gravity in 2+1 Dimensions (Cambridge University Press, 2003).
In article
[27] G. W. Gibbons, E. P. S. Shellard, and S. J. Rankin (eds.), The Future of Theoretical Physics and Cosmology (Cambridge University Press, 2003).
In article
[28] D. Giulini, C. Kiefer, and C. Lämmerzahl (eds.), Quantum Gravity. From Theory To Experimental Search (Springer, 2003).
In article
[29] C. Rovelli, Quantum Gravity (Cambridge University Press, 2004).
In article CrossRef
[30] G. Amelino-Camelia and J. Kowalski-Glikman (eds.), Planck Scale Ef- fects in Astrophysics and Cosmology (Springer, 2005).
In article
[31] A. Gomberoff and D. Marolf (eds.), Lectures on Quantum Gravity (Springer, 2005).
In article CrossRef
[32] D. Rickles, S. French, and J. Saatsi (eds.), The Structural Foundations of Quantum Gravity (Clarendon Press, 2006).
In article CrossRef
[33] B. Carr (ed.), Universe of Multiverse? (Cambridge University Press, 2007).
In article
[34] B. Fauser, J. Tolksdorf, and E. Zeidler (eds.) Quantum Gravity. Mathe- matical Models and Experimental Bounds (Birkh¨auser, 2007).
In article CrossRef
[35] D. Gross, M. Henneaux, and A. Sevrin (eds.), The Quantum Structureof Space and Time (World Scientific, 2007).
In article
[36] C. Kiefer, Quantum Gravity (2nd ed., Oxford University Press, 2007).
In article CrossRef
[37] T. Thiemann, Modern Canonical Quantum General Relativity (Cambridge University Press, 2007).
In article CrossRef
[38] D. Oriti, Approaches to Quantum Gravity. Toward a New Understanding of Space, Time, and Matter (Cambridge University Press, 2009).
In article CrossRef
[39] S. W. Hawking, Commun. Math. Phys. 43, 199 (1975).
In article CrossRef
[40] ibid. 55, 133 (1977).
In article
[41] Phys. Rev. D 13, 191 (1976).
In article CrossRef
[42] ibid. 14, 2460 (1976).
In article
[43] ibid. 18, 1747 (1978).
In article
[44] ibid. 32, 259 (1985).
In article
[45] ibid. 37, 904 (1988).
In article
[46] Phys. Lett. B 134, 403 (1984).
In article CrossRef
[47] Nucl. Phys. B 239, 257 (1984).
In article CrossRef
[48] Phys. Scr. T 117, 49 (2005).
In article
[49] contributions in: S. W. Hawking and W. Israel (eds.), General Relativity: An Einstein centenary survey, pp. 746-785 (Cambridge Uni- versity Press, 1979).
In article
[50] M. Levy and S. Deser (eds.), Recent Developments in Gravitation. Cargese 1978, pp. 145-175 (Plenum Press, 1979).
In article CrossRef
[51] B. S. DeWitt and R. Stora (eds.), Relativity, Groups, and Topology II, pp. 333-381 (Elsevier, 1984).
In article
[52] H. J. de Vega and N. Sánchez (eds.), Field Theory, Quantum Gravity, and Strings. Proceedings of a Seminar Se- ries Held at DAPHE , Observatoire de Meudon, and LPTHE, Université Pierre et Marie Curie, Paris, Between October 1984 and October 1985, pp. 1-46 (Springer, 1986).
In article
[53] J. J. Halliwell, J. Perez–Marcader, and W. H. Zurek (eds.), Physical Origins of Time Assymetry (Cambridge Uni- versity Press, 1992).
In article
[54] S. W. Hawking and W. Israel (eds.), Three hundred years of gravitation, pp. 631-652 (Cambridge University Press, 1987).
In article
[55] G. W. Gibbons and S. W. Hawking, Phys. Rev. D 15, 107 (1977).
In article
[56] J. B. Hartle and S. W. Hawking, Phys. Rev. D 28, 2960 (1983).
In article CrossRef
[57] J. J. Halliwell and S. W. Hawking, Phys. Rev. D 31, 1777 (1985).
In article CrossRef
[58] S. W. Hawking and D. Page, Nucl. Phys. B 298, 789 (1988).
In article CrossRef
[59] S. W. Hawking, R. Laflamme, and G. W. Lyons, Phys. Rev. D 47, 5342 (1993).
In article CrossRef
[60] R. Bousso and S. W. Hawking, SU-ITP-98-26 DAMTP-1998-87.
In article
[61] S. W. Hawking and T. Hertog, Phys. Rev. D 73, 123527 (2006).
In article CrossRef
[62] J. B. Hartle, S. W. Hawking, and T. Hertog, Phys. Rev. Lett. 100, 201301 (2008).
In article CrossRef
[63] Phys. Rev. D 77, 123537 (2008).
In article CrossRef
[64] L. A. Glinka, Grav. Cosmol. 16 (1), pp. 7-15, (2010).
In article CrossRef
[65] Concepts Phys. 6, pp. 19-41 (2009).
In article
[66] New Adv. Phys. 2, pp. 1-62 (2008).
In article
[67] L. A. Glinka, Grav. Cosmol. 15 (4), pp. 317-322 (2009).
In article CrossRef
[68] AIP Conf. Proc. 1018, pp. 94-99 (2008).
In article CrossRef
[69] in E. Ivanov and S. Fedoruk (eds.), Supersym- metries and Quantum Symmetries: Proc. of International Workshop, Dubna, Russia, July 30 - Aug. 4, 2007 (JINR Dubna, 2008), pp. 406-411.
In article
[70] SIGMA 3, pp. 087-100 (2007).
In article
[71] Ch. W. Misner, K. S. Thorne, J. A. Wheeler, Gravitation (Freeman, 1973).
In article
[72] R. M. Wald, General Relativity (University of Chicago, 1984).
In article CrossRef
[73] S. Carroll, Spacetime and Geometry. An Introduction to General Relativity (Addison-Weseley, 2004).
In article
[74] E. Poisson, A relativist's toolkit. The mathematics of black-hole mechan-ics (Cambridge University Press, 2004).
In article CrossRef
[75] J. W. York, Phys. Rev. Lett. 28, 1082 (1972).
In article CrossRef
[76] G. W. Gibbons and S. W. Hawking, Phys. Rev. D 15, 2752 (1977).
In article CrossRef
[77] R. Arnowitt, S. Deser and C. W. Misner, in L. Witten (ed.) Gravitation: An Introduction to Current Research, pp. 227–264 (John Wiley and Sons, 1962).
In article
[78] B. DeWitt, The Global Approach to Quantum F ield Theory (Clarendon Press, 2003).
In article
[79] A. Hanson, T. Regge, and C. Teitelboim, Constrained Hamiltonian Sys- tems (Accademia Nazionale dei Lincei, 1976).
In article
[80] B. S. DeWitt, Phys. Rev. 160, 1113 (1967).
In article CrossRef
[81] P. A. M. Dirac, Lectures on Quantum Mechanics (Belfer Graduate School of Science, Yeshiva University, 1964).
In article
[82] Phys. Rev. 114, 924 (1959).
In article CrossRef
[83] Proc. Roy. Soc. (London) A 246, 326 (1958).
In article
[84] Can. J. Math. 2, 129 (1950).
In article CrossRef
[85] A. E. Fischer, in M. Carmeli, S.I. Fickler, and L. Witten (eds.) Rela- tivity. Proceedings of the Relativity Conference in the Midwest held at Cincinnati, Ohio, June 2-6, 1969, pp. 303–359 (Plenum Press, 1970).
In article
[86] Gen. Rel. Grav. 15, 1191 (1983).
In article CrossRef
[87] J. Math. Phys. 27, 718 (1986).
In article CrossRef
[88] B. S. DeWitt, in M. Carmeli, S.I. Fickler, and L. Witten (eds.) Rela- tivity. Proceedings of the Relativity Conference in the Midwest held at Cincinnati, Ohio, June 2-6, 1969, pp. 359-374 (Plenum Press, 1970).
In article
[89] J. A. Wheeler, Geometrodynamics (Academic Press, 1962). in C. DeWitt and B. DeWitt (eds.) Relativity, Groups, and Topology. Lectures Delivered at Les Houches During the 1963 Session of the Summer School of Theoretical Physics, pp. 317-501 (Gordon and Breach Science Publishers, 1964).
In article
[90] Einsteins Vision (Springer 1968). in C. M. DeWitt and J. A. Wheeler (eds.) Battelle Rencontres 1967 Lectures in Mathematics and Physics, pp. 242-308 (W. A. Benjamin, 1968).
In article
[91] V. V. Fernández, A. M. Moya, and W. A. Rodrigues Jr, Adv. Appl. Clifford Alg. 11, 1 (2001).
In article CrossRef
[92] L.A. Glinka, AEthereal Multiverse: A New Unifying Theoretical Approach to Cosmology, Particle Physics, and Quantum Gravity (Cambridge International Science Publishing, 2012).
In article
1In this paper we use the units in units8πG/3=1, c=1,ћ=1.
• CiteULikeCiteULike
• MendeleyMendeley
• StumbleUponStumbleUpon
• Add to DeliciousDelicious
• FacebookFacebook
• TwitterTwitter
• LinkedInLinkedIn |
6c663636e9e0dd47 |
That’s Paul Ryan, Republican vice-presidential candidate, in a 2005 speech delivered at The Atlas Society–one of many lavishly funded organizations devoted to spreading the thought and philosophy of Ayn Rand (he’s since distanced himself).
There are so many of these organizations it is hard to keep track. Apart from the Atlas Society, there is the Ayn Rand Institute, the Nathaniel Branden Institute, the Anthem Foundation and the Institute for Objectivist Studies. Numerous libertarian think-tanks, like the Cato Institute, promote Rand. Campus groups–which receive funding from objectivist foundations–are everywhere, promoting Rand via slick newsletters (like The Undercurrent: “Obama wants to use Blakely’s earnings to cover the bill for thousands of less productive citizens’ flu shots and groceries,” a typical line reads–Blakely is the noble, visionary entrepreneur who created Spanx.)
fig. 2. To hell with your ‘flu shots,’ parasites
The fantastically rich find in Rand’s celebration of individual achievement a kindred spirit, and support her work with pecuniary enthusiasm: in 1999, McGill University turned down a million-dollar endowment from wealthy businessman Gilles Tremblay, who had given the money in the hopes of creating a chair dedicated to the the study of her work. Then-president Bernard Shapiro commented that “we can’t just sell our souls just for the sake of being richer,” hopefully aware of the irony: "what else is there but getting richer?" Rand literally ends her most famous novel, Atlas Shrugged, with the dollar sign replacing the sign of the cross, traced in the air–indicating the dawn of a new, bold, daringly sophomoric era.
Rand’s books have sold in the millions, never quite losing steam in the half-century since publication. A now-infamous Library of Congress survey placed Atlas Shrugged as the second-most influential book in America, trailing only the Bible–a dubious pairing, perhaps, given Rand’s militant atheism, but one that indeed captures the uneasy tension of contemporary America: the celebrated Protestant ethic versus the spirit of capitalism.
Despite her popular appeal, perennially best-selling books, and the breathless testimonial of politicians, actors and businessmen–Ryan is scarcely alone in his praise–professional academics almost universally disdain Rand. An online poll by widely-read philosophy professor and blogger Brian Leiter had Ayn Rand elected the one thinker who “brings the most disrepute on to our discipline by being associated with it,” by a landslide. She is almost never taught in classrooms. Her name elicits jeers and funny, exasperated tales of fierce, bright undergraduates under her spell arguing her case for hours on end.
This near-unanimous rejection has led to some remarkably uncharitable, and bizarre, attempts to explain away the lack of academic interest: in the Stanford Encyclopedia of Philosophy (SEP) entry on Rand, its authors write that “her advocacy of a minimal state with the sole function of protecting negative individual rights is contrary to the welfare statism of most academics,” claiming outright that the overwhelming majority of professional philosophers and political theorists have been simply unable to fairly evaluate her work because of the biasing factor of their prior political commitments.
Somehow the same ‘welfare statism’ of academics has not prevented the close study of Robert Nozick’s landmark Anarchy, State and Utopia, a sophisticated libertarian text that mounts an original, and far more effective, argument against redistributive policies. Apart from John Rawls’ A Theory of Justice, there is perhaps no more commonly-assigned book in undergraduate political philosophy classes.
Surely there must be some other reason for Rand’s academic neglect. The authors of the SEP entry do go on to suggest an additional number of largely psychological hypotheses having to do with Rand’s dogmatic tone, cult-like following, and emphasis on popular fiction–never entertaining the possibility that professional philosophers think her work is, quite simply, of poor quality. Objectively, ahem, speaking.
The Verma Post: Ayn Rand's Open Letter in Reply to Immanuel Kant
fig. 3. Immanuel Kant: “the preeminent good which we call moral …
is only possible in a rational being.” Oops.
What is Rand’s ‘philosophy’, then? Her own summary may be appropriated:
I am primarily the creator of a new code of morality which has so far been believed impossible, namely a morality not based on faith, not on arbitrary whim, not on emotion, not on arbitrary edict, mystical or social, but on reason; a morality that can be proved by means of logic which can be demonstrated to be true and necessary.
Now may I define what my morality is? Since man’s mind is his basic means of survival […] he has to hold reason as an absolute, by which I mean that he has to hold reason as his only guide to action, and that he must live by the independent judgment of his own mind; ... that his highest moral purpose is the achievement of his own happiness […] that each man must live as an end in himself, and follow his own rational self-interest.
The practical, political conclusions to be drawn from this ‘morality’ are surprisingly specific: a minimal government, for instance, which enforces no minimum wage law, operates no schools, collects no taxes, and merely enforces contracts in an economy that is otherwise entirely laissez-faire. Rational individuals do not come together to create a universal health insurance system in the process of seeking ‘happiness’. They do not pass laws restricting what age a child can work.
fig. 4. Rationality at work.
Unsurprisingly, the politicians and businessmen who admire Rand focus on such policy recommendations and are rather less familiar with, for instance, her grounds for rejecting the analytic-synthetic distinction. There’s a radical disconnect between the impact her political thought and the influence her metaphysics has had. Everybody who likes Rand can defend at great length a number of socio-economic theses; what very few do is discuss the metaphysical underpinnings that purportedly justify her political and social views.
This is unfortunate, because her philosophy attempts to form a coherent system, and these higher-order political views are the direct result of foundational assumptions in metaphysics and logic (and a series of complex derivations from these). This is one case where an opinion on the possibility of a priori knowledge could mean the difference between a school breakfast program and a hungry child.
Now there are two ways to approach Objectivism: first, and most commonly, we may tackle her edifying fiction, which portrays Manichean conflicts between heroic, intelligent ‘producers’ and parasitic ‘looters.’ The latter, mainly by force of numbers and all the vile raiments of democracy, get in the way of the former: they do not understand that they depend, utterly, on these rarefied ubermenschen, who, of course, ultimately triumph. Given the stark morality of the novels, everyone who reads them in a positive light cast themselves quite naturally as noble producers, and certainly not parasites, which, given Rand’s popularity, means we are a society absolutely replete with noble, heroic, rugged geniuses.
Well-meaning readers are taken in by her grandiose, if somewhat turgid, presentation of man as a heroic being, with his own happiness as the moral purpose of his life, with productive achievement as his noblest activity, and reason as his only absolute (Atlas Shrugged).
The values propounded in her work many find stirring and true. To contest Rand, to the true believer, is to besmirch rationality itself, to prize the unremarkable ‘collective’ over the individual, to shrug at excellence, and–from jealousy, or some other base instinct–to hate and undermine one’s betters and undeservedly demand what is theirs.
The other way in is via her ‘system’ of philosophy: resolutely materialistic, godless, and rationalistic. It proceeds largely from a set of basic axioms (‘existence’, ‘identity’, ‘consciousness’) and derives a more-or-less comprehensive set of metaphysical, epistemological and ethical views. Here we have a complicated internal jargon (which resists assimilation into the analytic vernacular) and a set of post-Randian writers–Peikoff, Kelley, and others–who have fleshed out and expanded her thought into something like a philosophical system in the traditional sense, the kind of thing that has been largely abandoned in contemporary academic philosophy. One can get a sense of the ‘system’ from a glance at the wikipedia page: there are any number of dubious inferences made, most remarkably from ‘existence’ to ‘identity’ to something like conceptual necessity (and thence to causality itself, defined as the “principle of identity applied to action”–possibly the most cringe-worthy explanation of causality to ever be presented seriously: in effect, we are told that things do as they do because they are as they are.)
From axiomatic bases the edifice is built: existence exists and is characterized by identity, which is populated by conscious beings, who must use reason to survive as individuals, and the dictates of reason force us to admit that rational self-interest is the only metaphysically coherent way forward, logically implying capitalism and free markets. To deny this is to deny that A is A.
Ayn Rand in New York, 1957
"I think she’s [Rand] one of the greatest people of all time. Ultimately, in philosophy,
she’s going to be one of the giants. I mean, she’ll be up there with Plato and Aristotle."
That’s Dr. Yaron Brook, who holds a PhD in Finance from the University of Texas at Austin. This provocative quote is culled from a recent interview in which he asserted that we are headed for a new dark ages unless we heed Rand’s wisdom. If we do not, “the next renaissance will begin when her books are rediscovered after 1,000 years of darkness.”
Brook is the director of the Ayn Rand Institute, the largest Objectivist organization, with a budget in the millions and political links to the Tea Party movement. [Question: Do they have links to Ross Perot, the Texas Oil Tycoon? and Independent nominee for President?]
fig 5. The enlightened Dr. Yaron Brooks: "I would like to see the United States turn Fallujah
The incredible conceit that Ayn Rand will figure in the history of philosophy as one of the greats–better than Kant (“corrupt”), Hegel (“nonsensical”) or Wittgenstein (“garbage”)–is not restricted to her contemporary followers: Rand, in the same 1957 interview with Mike Wallace linked above, described herself as the most creative thinker alive. (Corey Robin notes that “Arendt, Quine, Sartre, Camus, Lukács, Adorno, Murdoch, Heidegger, Beauvoir, Rawls, Anscombe and Popper were all at work” in 1957, and invites the reader to draw their own conclusions).
Rand’s extreme self-regard was mirrored in her friends and followers. Former Fed chairman Alan Greenspan–a member of Ayn Rand’s tightly-knit inner circle–only recently, and reluctantly, acknowledged that there may be ‘flaws’ in Rand’s ideology of self-interest. But in the 1960s, he was writing for objectivist newsletters, and praised Rand for decades afterwards: “talking to Rand was like starting a game of chess thinking I was good, and suddenly finding myself in checkmate,” he said. In a 1957 letter to the editor prompted by a dismissive review of Atlas Shrugged, he wrote:
One wonders at the type of celebration of ‘life’ that centers around satisfied joy at the perishing of so-called ‘parasites.’ A boldly totalitarian discourse of justified elimination, produced a scant dozen years after the end of the second world war. It is a tradition upheld by contemporary Randians: Brooks has called for unrestricted, murderous warfare in Iraq (see above); Leonard Peikoff, who originally founded the Ayn Rand Institute, calls for the “immediate end” of “terrorist states” such as Iran, not ruling out nuclear weapons, and this “regardless of the countless innocents caught in the line of fire.”
Admiration for Rand can be found in strange places. Actors from Brad Pitt to Farrah Fawcett have effused praise as well. Supreme Court Justice Clarence Thomas allegedly makes his clerks watch the 1949 film version of The Fountainhead. But the definitive statements belongs to her lover and confidante Nathaniel Branden (indeed, Rand’s heir apparent until a fractious and unsavoury dispute over his termination of their affair). He recalls writing, in all seriousness, that
These were the initial premises presented in Branden’s lecture courses on objectivism, approved and overseen by Rand herself. From this point of view, it is indeed very fortunate for humanity that Rand did not choose to ‘go Galt’ and, like her most famous protagonist, withdraw her genius from us.
Pin by Gogi Margvelashvili on Russian Revilution 1917 | Russian ...
fig 6. Soldiers marching in Petrograd, 1917. Rand was twelve.
Meanwhile, a thriving cottage industry of journalists, essayists, cultural observers and philosophers seem engaged in a one-upmanship contest over who can deride her with the most vicious economy of words possible. George Monbiot says of Rand that her thought “has a fair claim to be the ugliest philosophy the postwar world has produced.” Corey Robin, with a historical flourish, writes that “St. Petersburg in revolt gave us Vladimir Nabokov, Isaiah Berlin and Ayn Rand. The first was a novelist, the second a philosopher. The third was neither but thought she was both.” The late Gore Vidal was scathing even decades ago, writing in 1961 that Rand
"Has a great attraction for simple people who are puzzled by organized society, who object to paying taxes, who dislike the “welfare” state, who feel guilt at the thought of the suffering of others but who would like to harden their hearts […] Ayn Rand’s “philosophy” is nearly perfect in its immorality, which makes the size of her audience all the more ominous and symptomatic."
Criticism has not only come from the left. While Rand’s allure to conservatives is far more pronounced now–despite some lingering misgivings from religious groups–intellectuals on the right despaired of Rand’s growing influence when her books were first published. In the National Review, Whittaker Chambers wrote, in 1957:
This was at a time when many conservative intellectuals saw in the complexity of the world a reason to be trepidant about radical change and wary about the potential for deleterious destabilization it brings, an altogether different form of ‘conservatism’ from that of the present-day marriage of libertarian economics with theological presumption. Chambers rightly saw in Rand a dangerous radical, one who glosses over complexity in her desire to derive political prescription from first principles, an anti-conservative writer par excellence advocating a radically different society:
William F. Buckley, who helped define modern American conservatism by launching and serving as editor-in-chief of The National Review, specifically published Chambers’ critique (and others like it) to purge Rand from conservatismwriting that “her desiccated philosophy’s conclusive incompatibility with the conservative’s emphasis on transcendence, intellectual and moral” meant it was unworthy of the noble tradition of conservative politics, and to be cast out with the Birchers, anti-semites, and white supremacists.
The difference of opinion over the value of Rand’s work could not be more stark. One is hard-pressed to find a ‘moderate’ who finds in Rand some modest value or would characterize her as a decent, or simply good, thinker. She is either genius or a fraud; either a first-rate, world-historical intellectual or a hack writer who appeals to the worst in people by pointing to their wounded self-worth and telling them they are great (“To say “I love you” one must first be able to say the “I” ”, she wrote in The Fountainhead, sounding more like Dr. Phil and less like the heir to Aristotle).
This polarizing effect is remarkable. It is partially a function of the reach of her work: far worse things have been expressed than those ideas contained in Rand’s novels, but almost none have had the impact (ten million grenades handed out on street-corners do more damage than an atom bomb left sitting on a shelf). But the virulence is also a reaction to the breathless fanaticism of her converts, hyperbole matched to hyperbole, in the full knowledge that derision is often more effective than argument in inoculating the undecided.
fig. 7. Greatest. Human. Ever
It is true that Rand’s opponents in popular media often focus on her personal life–her exile from Russia, her ‘rational’ and tawdry affair with Branden, her Hollywood roots, her censorious soirees (hilariously parodied in Rothbard’s one-act ‘play’ Mozart was a Red)–and only mention her ethics and philosophy to disparage the conclusions reached. It appears self-evident that all this talk of ‘existence exists’ as applied to public policy is nonsense, so it suffices to trot out the absurdities of ‘ethical egoism’ and the case is settled.
Rand’s proponents, particularly those of an intellectual bent, find in such ‘evasions’ a confirmation that they hold a rationally acquired set of truths: otherwise, critics of Rand would be able to take on the system, rather than engage in ad hominem or demonstrate emotionally-clouded dislike of her inescapable conclusions (proof positive of their own unreason). Even those who have only felt from the novels an intuitive, undeniable pull know that beneath the pulp of Roark, Taggart and Galt lies a profound set of philosophical doctrines that the high priests can always ably defend and no critic dares touch.
Why Robert Nozick was a libertarian - Big Think
fig. 8. Robert Nozick–Another great, though lesser, human
One of the few academic philosophers to take Rand seriously enough to bother with a critique was our erstwhile libertarian friend Robert Nozick. His short article On the Randian Argument proposes to examine the alleged ‘moral foundations of capitalism’ provided by her system. Almost immediately it devolves in dialectical castigation, with Nozick taking Rand to task for lacking clarity, for failing to adequately support her premises, for drawing unsupported conclusions, and for baldly stating controversial theses as if they were self-evident facts. From the very first, he writes that “I would most like to set out the argument as a deductive argument and then examine the premises. Unfortunately, it is not clear (to me) exactly what the argument is.” His reconstruction is a marvel of patience and charity–combined with lacerating criticism. He sums up the argument:
(1) Only living beings have values with a point.
(2) Therefore, life itself is a value to a living being which has it.
(3) Therefore, life, as a rational person, is a value to the person whose life it is.
(4) Therefore, [there is] “some principle about interpersonal behaviour and rights and purposes.”
The argument is, at some length, considered and demolished. Two quick examples suffice (the interested reader may consult the piece).
Upon examining the premise that ‘life’ is a necessary precondition for the existence of value and is, therefore, a value itself (2), Nozick dryly comments that
"one cannot reach the conclusion that life itself is a value merely by conjoining together many sentences containing the world ‘value’ and ‘life’ or ‘alive’ and hoping that, by some process of association and mixture, this new connection will arise."
The problem is that Rand, Nozick says, does not consider other value-forming concepts during the course of her transcendental argument and has no means to rule them out:
"Cannot content be given to should-statements by … any one of a vast number of other dimensions or possible goals? … it is puzzling why it is claimed that only against a background in which life is (assumed to be) a value, can should-statements be given a sense. It might, of course, be argued, that only against this background can should-statements be given their correct sense, but we have seen no argument for this claim."
Puzzling indeed: certainly alternatives are possible. And it is not that these alternatives do not ‘value life’ themselves–of course they do, derivatively. Rand’s claim is that valuing life must be foundational, but, apart from some intuitive appeal, we are never told why that should be.
More troublesome yet is the leap from premises (1-3) to the vague principles of (4), which Nozick claims involves a number of dubious assumptions–not the least of which is a principle requiring there be no “objective conflicts of interests between persons”, ever. Surely this is too strong: even the Gods are known to quarrel.
Mount Olympus: Five facts about mythological Greek mountain
fig 8. Mt. Olympus
In a footnote, Nozick concludes that Rand’s attraction lay primarily in “the way it handles particular cases, the kind of considerations it brings to bear, its ‘sense of life’.” He continues:
"For many, the first time they encounter a libertarian view saying that a rational life (with individual rights) is possible and justified is in the writings of Miss Rand, and their finding such a view attractive, right, etc., can easily lead them to think that the particular arguments Miss Rand offers for the view are conclusive are adequate."
This is likely correct. Nozick, a libertarian political philosopher himself, is sympathetic to the some of the conclusions Rand draws, but finds himself unable to endorse the arguments presented. The ‘moral’ case for capitalism flounders in a morass of unjustified assumptions and leaps of inference, glossed over by a tone of material certainty. It seems plausible only to the extent that we appreciate her peculiar moral sensibility.
Her ‘metaphysics’ fare no better. This is all the more damning, since her value theory is meant to follow directly from her basic, indubitable axioms: identity, existence, and consciousness.
The abuses of ‘identity’ (“A is A”) have been singled out for particular criticism. Sidney Hook, writing in 1961, notes that:
The extraordinary virtues Miss Rand finds in the law that A is A suggests that she is unaware that logical principles by themselves can test only consistency. They cannot establish truth […] Swearing fidelity to Aristotle, Miss Rand claims to deduce not only matters of fact from logic but, with as little warrant, ethical rules and economic truths as well. As she understands them, the laws of logic license her in proclaiming that “existence exists,” which is very much like saying that the law of gravitation is heavy and the formula of sugar sweet.
The problem, in a nutshell, is that logical principles are devoid of genuine empirical content. One cannot derive particular facts from ‘A is A’ any more than one could conjure a slice of pizza from the Pythagorean theorem. Tautologies are meant to be vacuous. (Certainly, at least, public education is not a logical contradiction the same way a married bachelor, or a four-sided triangle, is.)
Logical technicalities aside, it is worth noting that the most important philosopher in the West since Aristotle has no mathematical or logical philosophy to speak of. Rand was writing in the immediate aftermath of the most fertile period of logical and mathematical development in human history. Her emphasis on ‘logic’ and the indubitable inevitability of her conclusions is made in the shadow of Frege, the set-theoretic paradoxes, and the Principia; of the debate over intuitionism, of the incompleteness proof and of the results of Tarsky, Church, Alonzo; and just at the dawn of paraconsistent logic (which rejects, inter alia, that it is always true that A is A).
Kurt gödel.jpg
fig 9. Kurt Friedrich Gödel, April 28, 1906
Indeed the crisis in the foundations of mathematics, the work of Tarski on truth, the rejection of the law of excluded middle by Brouwer and his followers, Gödel’s proofs–the list could be multiplied–had no effect on her, if she was even aware of any of it. The Atlas Society’s guide to objectivism candidly admits this lacuna, in its entry on the topic of the philosophy of mathematics:
Ayn Rand’s identification of the nature of universals and her analysis of the process of abstraction have much to contribute to the philosophy of mathematics. There is, however, no Objectivist literature on this topic.
Still, the reader will be glad to hear that the problem of universals has been solved, along with the processes that underpin conceptual abstraction. For a philosopher who prized logic, she remained utterly ignorant of it until her death, and some of her most ardent followers are determined to remain so themselves: Peikoff disparages all non-Aristotelian logic as “inherently dishonest […] an explicit rebellion against reason and reality (and, therefore, against man and values).”
Nozick, again, is perfectly clear on the logical issue: Rand is wrong. But it is not only that Rand uses strictly logical principles to derive ethical and political conclusions, which simply cannot happen, but the means by which she goes about the deduction–should we be so indulgent to permit it—is itself a strange wealth of confusion and error:
These ‘illegitimate uses’ are nothing short of extraordinary: John Galt, in Atlas Shrugged–Rand’s own mouthpiece, delivering the radio address than encapsulates her philosophical system–claims that:
Nozick is exactly right in claiming that Rand leverages the ‘principle of identity’ to all kinds of strange metaphysical purposes, including very contentious–if not outright false–conclusions about essentialism. In Galt’s speech we see “A is A” turned into a statement about the essential ‘nature’ of humankind that carries with it the full logical weight of the putative axiom.
Obviously this doesn’t work: suppose we accept that “A is A” (in particular we are not dialetheists; there is certainly no physical theory which introduces terms that violate identity or non-contradiction, and we know this a priori). This implies nothing about man’s nature. Not even that man has a nature at all. Or that it is fixed. That it cannot be changed, or consciously altered.
Even if man has a ‘nature’ in Rand’s sense, our rational aspects are of a piece with our creative capacities, our imaginative selves, our empathetic abilities, our emotional landscape, our sexual drives: to the extent that Rand’s analysis of human nature as rational is meant to be descriptive of what we actually are, it is surely false.
This is arguably not what she means. We have instead a logically-deduced (yet normative) claim about, say, the rational, conscious apprehension of independent reality being central to ‘man’ and his survival–the only value. (Though, obviously, we don’t in fact survive on reason alone. We couldn’t.) This conclusion, obviously, does not follow, or can be justified by, the principle of identity, nor does it seem a particularly good way of going about determining how to structure human civilization.
This last point is perhaps the most crucial: apart from the details of her argument, and the arcane mysteries that her defenders beckon us enter (to learn about ‘measurement omission’ and the “law of identity applied to action” and other putative solutions to open problems), the most fundamental problem is the methodological assumption that reflection on ‘self-evident’ axioms can generate a host of inescapable moral, political, and economic truths.
Here, then, is a methodological digression, to provide a contrast.
My own politics are generally informed by the desire to live in a ‘good’ society. I’m pretty casual about what ‘good’ means, precisely:
some kind of pluralist satisficing compromise borne of reflective equilibrium. Most of us want a society that is free, genuinely meritocratic, absent egregious social strife and inequality (for basic Rawlsian reasons), with just laws and a representative government and opportunities to develop one’s talents and interests without too much interference.
I like to make arguments based on comparative case studies, analysis of available data, incorporation of sundry pragmatic and practical considerations, various heuristic devices (that are admittedly fallible but reliable), with an eye both the desirability outcomes and the caveat that ends don’t always justify means. It’s not particularly elegant, but it’s reasonable and it works. I make no claim to perfect consistency, have no self-contained system, and pretend to no ultimate, objective answers.
Now contrast this with an axiomatic, a priori approach: where one begins with self-evident truths (or ‘first principles’) and then derives conclusions based on analysis of these: we might take private property as a fundamental concept, for instance, and conclude that we have no duties to the poor. In this we proceed as Hobbes did in his Leviathan, Spinoza in the Ethics, or as Rand does with her “three axioms.” As we saw above, from the assertion of indubitable truths (“existence exists” and so on) we conclude, at the end of a long derivation, that essentially the sole purpose of government is the defense of negative individual rights.
fig. 10. “A is A”; therefore, you must sleep with me.
(Yes, Rand basically said this.)
That is difficult to understand is why we should believe that reasoning from so-called ‘first principles’ can tell us anything at all about how to build and maintain something as complex and messy as a human society, with complex social, economic, political arrangements presided over by only partially rational creatures prone to outbursts of passion, crises of confidence, and known, predictable irrationalities.
Axiomatics are useful–more than useful–in many domains. Like in set theory, formal logic and mathematics. But the situation is subtle and messy even in these. A representative example: must we accept the axiom of choice? There are dozens of variations on the formal set-theoretic axioms: mathematicians often use the ZFC axioms, but many don’t.
Another simple example. For the better part of 2,000 years rejection of the Euclidean Parallel Postulate was deemed impossible–until the discovery of non-Euclidean geometries demolished its apodictic standing. If the situation is so difficult even at the level of mathematics and geometry, the standard-bearers of objective purity in human knowledge, what hope is there of deriving monetary policy from ‘A is A’–assuming such a project is even coherent?
In the hard sciences, contemporary physical scientists certainly don’t rely solely on axiomatics. Sure, some theoretical physicists proceed at an extreme level of mathematical abstraction; a certain measure of empirical input is nonetheless required (various physical constants, for example). From chemistry on up it’s perfectly obvious that axiomatics simply don’t work. And we know this for a fact: while quantum mechanics in principle allows us to calculate the properties of chemical systems without having to perform any lab experiments (via the Schrödinger equation) in practice the calculations are far too complex to solve except for the very simplest systems.
This is chemistry–again, a hard science. And then what? Upwards, to biology? Then psychology? Then sociology and political science? Economics? Rand is to make us believe that the axiomatic method can tell us profound truths about the incredibly more complicated, higher-order, non-linear complex systems involved in running a planet? From logic alone?
Note that the prima facie plausibility of any putative axioms has nothing to do with this criticism, which is that the deductive mode of reasoning is completely inapplicable to the topics considered. I don’t even need to examine whether ‘existence exists.’ There is no way Rand’s method is knowledge-producing.
fig. 11. Alan Greenspan–“Turns out selfishness
actually destroys society. Who could’ve guessed?”
Alan Greenspan testified before a senate committee in the aftermath of the financial crisis, in October of 2008. He admitted that
I made a mistake in presuming that the self-interests of organizations, specifically banks and others, were such as that they were best capable of protecting their own shareholders and their equity in the firms […] Those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity, myself included, are in a state of shocked disbelief.
Pressed by committee chair Henry Waxman, who asked pointedly “do you feel that your ideology pushed you to make decisions that you wish you had not made?” Greenspan answered: “Yes, I’ve found a flaw. I don’t know how significant or permanent it is. But I’ve been very distressed by that fact.”
For many on the left, the sub-prime meltdown, financial crisis, and ongoing recession are proof positive that the laissez-faire, deregulatory approach is dead in the water. Rand’s followers have drawn the opposite conclusion: the crisis is the result of too much interference and the failure of governments to fully implement the measures they propose. Going half-way, the argue, simply will not work.
In this they may be right. In a landmark 1956 paper, The General Theory of Second Best, economists R. G. Lipsey and Kelvin Lancaster demonstrate a profound and surprising result: that market failures–that is, failure to achieve some specified optimality condition–may require de-optimization of other parameters. In other words, the “second-best” solution, when the best cannot be achieved, is not necessarily the next most similar to the best solution. The authors prove that it
is not true that a situation in which more, but not all, of the optimum conditions are fulfilled is necessarily, or is even likely to be, superior to a situation in which fewer are fulfilled […] if one of the Paretian optimum conditions cannot be fulfilled a second best optimum situation is achieved only by departing from all other optimum conditions.
Under one intuitive, and ultimately misleading, line of thought, any progress made towards an ‘ideal’ situation is, ipso facto, an improvement. Call this ‘ideal-state incrementalism’ (ISI). Suppose that it were true–per impossible–that Rand’s vision of a society of purely rational egoists, engaged in voluntary cooperation without any state interference would in fact be the best possible arrangement. Under the assumption of ISI, any society that more closely resembles the Randian ideal is better off than one which departs from it in more significant ways. But the theory of ‘second-best’ tells us that ISI is not always true: given some departure some ‘ideal’ conditions in one aspect, it does not follow that the best option is the one where all the other conditions are ideal.(In fact, the authors stress that we cannot know a priori what to do: a detailed, contextual analysis is required.)
fig. 12. We’re hungry, but at least the system is moral.
Imagine the following toy model with three parameters: rationality, regulation, and redistribution. When the parameters are set to maximize rationality and minimize regulation and redistribution, the model achieves its optimal state–imagine, perhaps, these parameters can be set from 0 to 5, so that the ideal, optimal state is when we have the parameters at <5 0="">.
Suppose humans are not always rational (or that information is imperfect, or any other of dozen plausible ways to deviate from the ideal case), so that the parameter value of human rationality is, inescapably, a mere , but we are free to set the parameters for regulation and redistribution. It is not the case that <3 0=""> is the next best solution. It might be <3 2="" 4="">. Or something else entirely. The ideal-state incremental assumption supposes that outcome correlates in a linear fashion to proximity to the ideal state. But this is often false.
My presentation glosses over a number of more technical points. For present purposes we can ignore these and focus on the moral of the story: if the benefits of a Randian society are only tangible when certain onerous optimization conditions are met, then the value in pursuing such a society is proportional to the feasibility of its actual construction. And what are, honestly, the chances of this wondrous rational society? Slim, I suggest, to none. Now multiply this probability by the chance that Rand is right in the first place.
The methodological problem returns in an indirect fashion: if we cannot count on the description of some logical ‘ideal’ state to guide our policy choices–if it is not the case, in other words, that the correct thing to do is always to become more like Rand’s ideal, even assuming Rand’s ideals are correct–then we must proceed in some other way. I’ve outlined one such method above: messy, trial-and-error empirical work, principled yet fumbling, rigorous yet humble, necessarily imperfect and always adapting to contingency as it comes.
The Randian may object to all this that we presume outcome is somehow key to evaluating their position. This, they may protest, assumes a roughly utilitarian view, which they are keen to reject. The point of the Randian ideal is not that her views will prove to be of benefit to all once implemented, but that they are the only coherent moral views that are at all possible.
This is certainly a tactic many objectivists could adopt, if they are comfortable with abandoning the claim that the most moral society is also the most beneficial society, a view that has some currency in orthodox circles (most prominently in Leonard Peikoff’s interpretation of Rand). In any event, the objection presupposes that no amount of general welfare could possibly make up for even the slightest violation of Randian negative rights (as Rand writes in The Virtue of Selfishness, “there can be no compromise on moral principles”).
Yet surely, at some point, most of us would say that, even if a given right was perfectly genuine, there are cases when it can be violated. One man’s ‘right’ to hold a patent on a medicine sometimes yields to the suffering of millions.
The man will get over it. Things are complicated.
fig. 13. How To Win Friends and Influence People, c. 350 BCE
Rejection of the Randian weltanschauung is not tantamount to rejecting all the values espoused within it. Much can be said to commend individualism against conformity, and the virtues of entrepreneurship and self-reliance. But commitment to these values does not logically imply the minimalist state advocated by Rand (let alone opposition to, say, minimum wage laws). They merely add to our existing stock of values to reflect on and take into consideration when deliberating.
Rand should have taken more of a cue from Aristotle, who warned, in the second book of the Nicomachean Ethics, that virtues need to be balanced, for excess and deficiency destroy their virtuous nature:
Aristotle’s first example, appropriately enough, concerns money:
For Aristotle, virtue required a careful weighing–borne of experience–that was able to discern when a virtue became a destructive vice from either a lack or a surfeit. An excess of courage results in rashness. A lack of liberality, meanness. And so on. We can–and should–consider some of virtues Rand holds up as genuine virtues, that is, virtues with a mean, that Rand’s implacable and stark philosophy has distorted beyond recognition. For, in Rand’s view, there is no mean, no discernment, no compromise, no weighing, no evaluation, no gray area:
Morality is a code of black and white. When and if men attempt a compromise, it is obvious which side will necessarily lose and which will necessarily profit […] The cult of moral grayness is a revolt against moral values (The Virtue of Selfishness).
Nothing could be further from Aristotle, not because the doctrine of virtue ethics revels in moral ambiguity–it does not–, but because its methodology involves fallible heuristic deliberation and not absolute fiats: the virtuous man is like this, Aristotle suggests to us, providing practical examples and instructing us to look to those we consider virtuous for guidance so that we improve our character. Rand claims the moral man does this, laying down final rules and telling us they can never be transgressed. Context never matters.
There is nothing wrong with some measure of self-regard or egoism; and there is much to be celebrated in individual accomplishment. No opponent of Rand denies this. But the mean is the thing. To excess, the Randian virtues traits lead to a lack of empathy, a poverty of moral imagination, and an inability to recognize that individual accomplishment is always contextual, performed against a backdrop of happy opportunity and moral luck–and, all too often, a long history of ‘cooperation’ that can certainly not be termed ‘voluntary.’ Individualism, for all its merits, is no excuse for ignoring history. Or for glossing over the plain fact that human behaviour, considered in aggregate, is predictable, and that collective responses to contextual factors is, sometimes, the second-best we can do.
Whether or not any useful moral lessons may be drawn from Rand’s work may depend on individual temperament and ability to read with a grain of salt (or more). Poisoned as her work is by absolutism, dogma, and histrionics, it is perhaps best to leave well enough alone and read, instead, Little House on the Prairie if one hungers for stories of ruggedness and survival.
fig. 14. The free market decided this film sucked.
The fate of the recent movie adaptation of Atlas Shrugged provides an illustrative parable about the dangers of deviation from the Aristotelian mean.
In 1992, ten years after Rand’s death, investor and self-described objectivist John Aglialoro bought the rights to Atlas Shrugged for a million dollars, with the condition that the rights expire within twenty years should no movie be produced. Like many projects, the movie remained in what is termed ‘development hell’ for years–shuffling from writer to writer and studio to studio, with various names attached, actors dropping out, and several false starts. Eventually, as the rights were set to expire, the film was rushed to production with a poor script, little budget, and no famous actors.
Produced at a cost of roughly twenty million dollars, it took in less than five at the box-office. Critics deemed it a flop; even sympathetic audiences found it stilted and clunky. In other words, the rational self-interest of the movie producers–who were set to lose the rights to the film–ensured that a shoddy and mediocre money-loser would make it to cinemas. Perhaps if more focus had been put on, say, creativity, or collaboration, or the selfless dedication art requires–perhaps if Aglialoro had been able to put aside his investment, take the hit, and hand over the movie to more capable hands, the value of the brand might have been better served. As it is, Atlas Shrugged – Part 1 works better as its own cautionary tale about the values it espouses. (The forthcoming sequel, financed by a private debt sale, reminds us that even money-losers can get a free lunch if they serve the right interests).
In final analysis, for all Rand’s emphasis on non-conformity and individualism, the greatest irony is perhaps the sheer amount of charity money that her thought has attracted–in America, at least, it represents an absolutely unprecedented interest in metaphysical speculation, typically the domain of continental Europe.
Now perhaps Rand’s work truly constitutes the most important and greatest progress in philosophy since Plato and Aristotle, superseding Aquinas, Descartes, Hume, Kant, Russell, Wittgenstein, and all the rest. We should count ourselves lucky that the fortunate deign to enlighten us, relieve us of our intellectual torpor, provide us with the genuine grounds for an intellectually serious life free of parasitism, to celebrate the entrepreneur within, and, perhaps, just maybe, let little children get some real work experience. Once free of the encumbrances of a tyrannical collectivist nanny-state that forces unwilling and unwitting children to go to such a vicious and unjust imposition as taxpayer-funded grade school, perhaps the dark ages can be narrowly avoided. Lucky, indeed, that the rich should, just this once, exempt themselves from selfishness to educate pro bono. |
90e00a4d0ebc0de5 | Searching in the dark
Share this on social media:
Jessica Rowbury looks at the latest research in the hunt for dark matter, one of the biggest mysteries in physics
Visible matter represents everything we can touch and see, yet it makes up less than five cent of the universe. The vast majority of the universe is dark, and does not emit or reflect – and cannot be detected with – light. The existence of dark matter has been inferred by astronomers as a result of the gravitational effects on visible matter, radiation and the structure in space, rather than it being observed through direct or indirect methods.
The first evidence for the existence of dark matter was produced in the 1930s, when astronomers observing the motion of galaxies found a discrepancy in their expectation that only accounted for matter that emitted light.
Observations of gravitational lensing have also pointed to matter additional to what is visible. Gravitational lensing, an effect related to Einstein’s general theory of relativity, causes gravity to bend the path of a beam of light in a particular way, meaning that a large object can distort the image of a distant light source in a manner similar to a magnifying glass. By comparing the known position of the source (obtained through direct emission of visible particles from the source) to its distorted image, the distribution of the matter causing the distortion can be reconstructed.
More recently, supercomputer simulations of the universe’s structure have shown that including only visible matter does not reproduce the structures observed in the universe – a closer agreement between observations and simulations is obtained only through including both visible and dark matter.
The presence of dark matter and its amount in the universe can also be inferred from the variations of temperature in the early universe. However, none of these observations provide a clear indication of what dark matter is made of.
Scientists around the globe hope to understand its nature by observing rare dark matter particles and their interactions from space, and by trying to produce them in controlled laboratory conditions.
Detecting in the lab
Scientists working with the Atlas detector at CERN’s Large Hadron Collider (LHC) are attempting to further understand the nature of dark matter by producing it in controlled laboratory conditions.
In March, CERN’s research board approved a new experiment designed to look for light and weakly interacting particles. FASER, or the Forward Search Experiment, will complement CERN’s ongoing physics programme, extending its discovery potential to several new particles associated with dark matter. Astrophysical evidence shows that dark matter makes up about 27 per cent of the universe, but it has never been observed and studied in a laboratory.
3D image of the planned FASER experiment in its tunnel. Credit: FASER/CERN
With an expanding interest in undiscovered particles, particularly long-lived particles and dark matter, new experiments have been proposed to expand the scientific potential of CERN’s accelerator complex and infrastructure as part of the Physics Beyond Collider (PBC) study, under whose aegis FASER operates. ‘This novel experiment helps diversify the physics programme of colliders, such as the LHC, and allows us to address unanswered questions in particle physics from a different perspective,’ explained Mike Lamont, co-coordinator of the PBC study group.
The four main LHC detectors are not suited for detecting the light and weakly interacting particles that might be produced parallel to the beam line. They may travel hundreds of metres without interacting with any material before transforming into known and detectable particles, such as electrons and positrons. The exotic particles would escape the existing detectors along the current beam lines and remain undetected. FASER will therefore be located along the beam trajectory 480 metres downstream from the interaction point in the Atlas detector. Although the protons in the particle beams will be bent by magnets around the LHC, the weakly-interacting particles will continue along a straight line and their ‘decay products’ can be spotted by FASER. The potential new particles would be collimated with the beam, spreading out very little, therefore allowing a relatively small and inexpensive detector to perform highly sensitive searches.
The detector’s total length is less than five metres and its core cylindrical structure has a radius of 10 centimetres. It will be installed in a side tunnel along an unused transfer line which links the LHC to its injector, the Super Proton Synchrotron.
FASER will search for a suite of hypothesised particles, including so-called ‘dark photons’, particles associated with dark matter. The experiment will be installed during the ongoing Long Shutdown 2 and start taking data from LHC’s Run 3 between 2021 and 2023.
‘It is very exciting to have FASER approved for installation at CERN. It is amazing how the collaboration has come together so quickly, and we are looking forward to recording our first data when the LHC starts up again in 2021,’ said Jamie Boyd, co-spokesperson of the FASER experiment.
‘FASER is a neat physics proposal that addresses a particular aspect in the search for physics beyond the standard model, and I am pleased to see it being implemented
so efficiently,’ added Eckhard Elsen, CERN’s director for research and computing.
Dark matter optics
Another area of experiment assumes that the elementary dark matter particles are very light bosons that collectively behave as a wave that self-interacts gravitationally. It is mathematically described with the so-called Schrödinger-Poisson equation (SPE, sometimes also called Schrödinger-Newton or Gross-Pitaevskii-Newton equation), a model introduced as a non-relativistic approximation to the dynamics self-gravitating scalar fields.
According to Humberto Michinel and Ángel Paredes at the Optics Laboratory of the University of Vigo in Ourense, Spain, this scenario provides an interesting opportunity for optics research.
Recently, a number of experiments have been performed to mimic aspects of Newtonian gravitation in non-linear optical setups, opening the possibility of designing optical analogues of gravitational phenomena, the researchers have said. Different versions of the non-linear Schrödinger equation, which are usually solved by means of computer simulations, are typically used for the description of laser light propagation in non-linear media. In particular, the SPE applies, among other situations, to the propagation of a laser beam in a thermo-optical material, namely one in which the refractive index depends on temperature. In this context, the Poisson equation appears naturally as a steady state heat equation.
Thus, the appearance of SPE in different frameworks is a mathematical coincidence that suggests that certain properties of dark matter behaviour should have an optical counterpart, Michinel and Paredes argue. For instance, solitons have been a subject of intense research during the past decades in the context of laser propagation in non-linear optical media. In an optical soliton, the interplay of diffraction, dispersion and non-linear optical properties of the materials gives rise to robust particle-like beams that can travel unlimited distances without any distortion in its shape. These ‘light bullets’ can also trap another light inside them, acting as light-guiding structures, yielding an ever increasing control on light propagation and to deep connections to other areas of physics, light fluid dynamics or cold atoms. Their cosmic equivalent would be the existence of huge self-trapped dark matter solitons that constitute the core of typical galaxies, with sizes in the hundreds or thousands of parsecs. In a sense, they act as waveguides for the galaxies that are trapped in the zone of highest gravitational attraction. This situation is analogous to all-optical soliton waveguides that are well known in non-linear optics.
It is also intriguing to consider soliton collisions from this point of view, noted Michinel and Paredes. In optics, researchers are acquainted with the fact that solitons behave like robust objects but that, when they meet, interference plays a decisive role. The same mechanism might therefore be at play in galactic collisions. Since dark matter solitons are coherent waves, they can interfere constructively or destructively like laser beams in an interferometer. In the case of solitons behaving like robust clumps, destructive interference is analogous to an elastic collision between two particles that separate after hitting each other. It is important to notice that the meticulous observations of galactic clusters stand out among the most promising strategies for understanding dark matter. The displacements of the stars, with respect to the gas and dark matter, give information about the dynamics of the collisions. Interferential optical-like phenomena might result in detectable features that, in fact, might have already been observed.
According to Michinel and Paredes, this ‘dark matter optics’ approach can be applied to the study of other cosmic phenomena, such as the coalescence of solitons in relation to galactic mergers or the interactions between dark matter and supermassive black holes, which can be easily modelled as ‘dot potentials’. Similarly, modulation instability and filamentation may provide an optical analogue of cosmic structure formation, they explained.
Conceptually, it is appealing that, albeit partially, the astrophysical and cosmological dynamics of dark matter can find an analogy in the spatial profiles of laser beams. Moreover, it is worth analysing this interdisciplinary connection, as usual notions or methods of optics might find application for dark matter or vice versa. ‘We hope this remarkable analogy will pave the way to performing tabletop photonics experiments that mimic particular aspects of the behaviour of dark matter in an optical laboratory,’ said Michinel and Paredes. EO |
56a07a89c79de7b8 | Preface to the second edition
Quantum mechanics has been compared to a wolf in sheep’s clothing. While the theory’s formalism can be written down on a napkin, attempts to interpret it fill entire libraries. In this book we attempt to make sense of quantum mechanics in a way that steers clear of two common errors, which jointly account for most of the stacks in those libraries. The vastly more pervasive of the two errors, Ψ-ontology, has its roots in “the bizarre view that we, at this point in history, are in possession of the basic forms of understanding needed to comprehend absolutely anything”, a view that appears to be particularly de rigueur in the philosophy of science. It leads at once to “the great scandal of physics”, “the disaster of objectification”, which consists in the insolubility of the “BIG” measurement problem —the problem of explaining how measurement outcomes arise dynamically. The other, less common, error is that made by the so-called anti-realists, who content themselves with looking upon the theory as a tool for making predictions. What appears to have escaped everyone’s notice is the possibility of a coherent conception of reality that does not fall prey to “our habit of inappropriately reifying our successful abstractions”, a conception that explains why the formal apparatus of quantum mechanics is a probability calculus, and why the events to which (and on the basis of which) it serves to assign probabilities, are possible measurement outcomes.
This book has been written with three kinds of readers in mind. Students may find it to be an invaluable supplement to standard textbooks. While quantum physics makes use of many of the concepts that students are familiar with from classical physics, the manner in which these concepts enter the quantum theory is rarely clarified sufficiently. How, for instance, did momentum become a self-adjoint operator acting on vectors in a Hilbert space? Such fertile sources of perplexity are at once disposed of by the insight that the formal apparatus of the theory is a probability calculus. As one reviewer of the first edition put it:
The way this book covers the two slit experiment everything falls into place and makes perfect sense. There is no wave particle dualism, just the naked necessity of a probabilistic regime. It is so simple. Painfully obvious. Easy to grasp with just a minimum of mathematical rigor. It boggles the mind that QM has not been understood this way from the get go. This feels like 20/20 hindsight writ large…. If you’ve been trying to make sense of QM you will hate this book. It’ll make you feel stupid for not having been able to see this all along.
Teachers may appreciate the resulting disentanglement of the theory’s formalism from its metaphysical issues. My co-author Manu Jaiswal is a case in point. Encouraged by the first edition, he began teaching, with remarkable success, what had previously appeared to him an abstruse subject.
Footnote: In September 2016 Manu received an award for excellence in teaching and research at the Indian Institute of Technology Madras, which was based largely on students’ evaluation.
And finally, the metaphysically interested general reader may welcome this book as the missing link between the proliferating popular literature on quantum mechanics and the equally proliferating academic literature. For them, the requisite mathematical tools are introduced, partly along the way and partly in an Appendix, to the point that all theoretical concepts can be adequately grasped. In doing so, we (Manu and I) tried to adhere to a principle known as “Einstein’s razor,” according to which everything should be made as simple as possible, but no simpler.
The book is divided into three parts. After a short introduction to probability, Part 1 (“Overview”) follows two routes to the Schrödinger equation—the historical route and Feynman’s path–integral approach. On the second route we stop once for a concise introduction to the special theory of relativity. Two sections have been added, one on tunneling and one discussing a quantum bouncing ball.
Part 2 (“A Closer Look”) begins by deriving the theory’s formal apparatus from the obvious existence of “ordinary” objects—stable objects that “occupy space” while being composed of objects that do not “occupy space” (which are commonly thought of as pointlike). We come to understand the need to upgrade from the trivial probability calculus known as classical mechanics to the nontrivial probability calculus known as quantum mechanics, and how to do so. The next two chapters are concerned with what happens if the fuzziness that “fluffs out” matter is ignored. (What happens is that the quantum-mechanical correlation laws degenerate into the dynamical laws of classical physics.) In Chapter 10 the discussion of quantum mechanics resumes along more traditional lines, with new sections on Ehrenfest’s relations, conservation of probability, and the uncertainty relation for non-commuting operators. Chapter 11, on spin 1/2 systems, has a new section on the Stern-Gerlach experiment as an example of an unsharp observable, in which POVMs are introduced. This is followed by a newly added chapter on angular momentum and the hydrogen atom. The chapter on composite systems has been split into two, with new sections on EPR, Kochen and Specker, the respective inequalities of Klyachko and CHSH, and the apparent conflict between quantum mechanics and relativity. The two remaining chapters of Part 2 have survived largely unchanged.
The most significant changes, accounting for the bulk of the nearly 200 pages added, occur in Part 3 (“Making Sense”). Chapter 17 concerns how the founders—in particular, de Broglie, Schrödinger, Heisenberg, and Bohr—sought to make sense of the new theory. The key concept there, introduced by Schrödinger, is that of objectivation, which is both counterpoint and answer to the “disaster of objectification.” Whereas objectification would (if it did) occur in a pre-existent external world, the term “objectivation” refers to the representation of a mentally constructed internal world as a shared objective world. This concept goes back to Kant —easily the most important philosopher of the modern era—who insisted that “we cannot understand anything except that which has something corresponding to our words in intuition”. Schrödinger, Heisenberg, and Bohr would all have agreed with von Weizsäcker—a student of Bohr and Heisenberg—that “those who really want to understand contemporary physics—i.e., not only to apply physics in practice but also to make it transparent—will find it useful, even indispensable at a certain stage, to think through Kant’s theory of science”. As we are doing in this chapter.
Chapter 18 discusses attempts—by von Neumann, London and Bauer, Wigner, and Schrödinger—to come to terms with the role that consciousness plays in our accounts of the physical world. A derivation of quantum mechanics by the transcendental method introduced by Kant is outlined, and the notion that quantum-mechanical indeterminism provides the physical basis of free will is briefly discussed.
Chapter 19 is devoted to QBism, the “new kid on the block” of interpretations of quantum mechanics, which Mermin thinks “is as big a break with 20th century ways of thinking about science as Cubism was with 19th century ways of thinking about art.” The importance of this interpretation is that it roots the definiteness of measurement outcomes as firmly as none other in the personal experiences of each user (of quantum mechanics) or agent (in the quantum world).
The subject of Chap. 20 is Ψ-ontology in its two dominant forms, Everettian quantum mechanics and the de Broglie/Bohm theory, and Chap. 21 deals with environmental decoherence. This makes up for a deficiency of older textbooks (including our first edition) that was pointed out by Tegmark: “If you are considering a quantum textbook that does not mention `Everett’ and `decoherence’ in the index, I recommend buying a more modern one.”
The presentation of our own interpretation begins in Chap. 22, with a statement of the interpretive principle that replaces the eigenvalue-eigenstate link, which is regarded by many as an essential ingredient of the standard formulation of quantum mechanics. Our interpretive principle implies that what is incomplete is not quantum mechanics, as EPR had argued, but the spatiotemporal differentiation of the physical world. This allows us to establish the theory’s semantic consistency (or the should-be unsurprising fact that the quantum-mechanical correlation laws are consistent with the existence of their correlata). Also implied by our interpretive principle is the numerical identity of all fundamental particles in existence, which is the subject of Chap. 23.
In Chap. 24 we come to the heart of our interpretation, the manifestation of the world. Put in the proverbial nutshell: by entering into reflexive spatial relations, Being—that which all existing fundamental particles identically are, which in the first edition was called Ultimate Reality (UR)—creates matter, space, and form, for space is the totality of existing spatial relations, forms resolve themselves into particular sets of spatial relations, and matter is the apparent multitude of the corresponding relata—”apparent” because the relations are reflexive. We come to understand the rationale for the all-important distinction, made by the founders and all but criminally neglected by modern interpreters, between a classical or macroscopic domain and a quantum or microscopic domain. This distinction amounts to a recognition of the difference between the manifested world and its manifestation. Because the latter consists in the gradual realization of distinguishable objects and distinguishable regions of space, the question arises as to how the intermediate stages are to be described, and the answer is that whatever is not completely distinguishable can only be described by assigning probabilities to what is completely distinguishable. This explains why the general theoretical framework of contemporary physics is a calculus of correlations between measurement outcomes. Particles, atoms, and molecules, rather than playing the roles of constituent parts, are instrumental in the process of manifestation, and what is instrumental in the manifestation of the world can only be described in terms of correlations between events that happen (or could happen) in the manifested world.
Chapter 25 summarizes our derivation of the mathematical formalism of quantum mechanics from the existence of “ordinary” objects, and goes on to argue that even the classical (long-range) forces, the nuclear (short-range) forces, and general relativity are preconditions for the possibility of a world that conforms to the classical narrative mode—a world whose properties allow themselves to be sorted into causally evolving bundles (i.e., re-identifiable substances).
In Chap. 26 we turn to the second great theoretical challenge of our time, besides making sense of quantum mechanics, namely the challenge of making sense of the fact that the world appears to exist twice—once for us, in human consciousness, and once again by itself, independently of us. The conclusion that forces itself on us is that there is no such thing as a self-existent external world, and that the “hard problem of consciousness” is as insoluble a pseudo-problem as the “BIG” measurement problem, both of which presuppose such a world. The world is not simply manifested; it is manifested to us. Or else, Being does not simply manifest the world; it manifests the world to itself. It is not only a single substance by which the world exists, but also a single consciousness for which the world exists. How we, at this evolutionary juncture, are related to that consciousness, is the subject of the final chapter, in which we also come to understand why “ordinary” objects (having spatial extent) are “composed” of finite numbers of objects lacking spatial extent, and how Being enters into reflexive relations (and thereby manifests both matter and space).
February 21, 2018 |
af1687fe4d509576 | Next Article in Journal
Entropic Uncertainty Relations for Successive Measurements in the Presence of a Minimal Length
Next Article in Special Issue
Experimental Non-Violation of the Bell Inequality
Previous Article in Journal
Exponential Strong Converse for Source Coding with Side Information at the Decoder
Previous Article in Special Issue
What Constitutes Emergent Quantum Reality? A Complex System Exploration from Entropic Gravity and the Universal Constants
Open AccessArticle
Quantum Trajectories: Real or Surreal?
Authors to whom correspondence should be addressed.
Entropy 2018, 20(5), 353;
Received: 8 April 2018 / Revised: 27 April 2018 / Accepted: 2 May 2018 / Published: 8 May 2018
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
The claim of Kocsis et al. to have experimentally determined “photon trajectories” calls for a re-examination of the meaning of “quantum trajectories”. We will review the arguments that have been assumed to have established that a trajectory has no meaning in the context of quantum mechanics. We show that the conclusion that the Bohm trajectories should be called “surreal” because they are at “variance with the actual observed track” of a particle is wrong as it is based on a false argument. We also present the results of a numerical investigation of a double Stern-Gerlach experiment which shows clearly the role of the spin within the Bohm formalism and discuss situations where the appearance of the quantum potential is open to direct experimental exploration.
Keywords: Stern-Gerlach; trajectories; spin Stern-Gerlach; trajectories; spin
1. Introduction
The recent claims to have observed “photon trajectories” [1,2,3] calls for a re-examination of what we precisely mean by a “particle trajectory” in the quantum domain. Mahler et al. [2] applied the Bohm approach [4] based on the non-relativistic Schrödinger equation to interpret their results, claiming their empirical evidence supported this approach producing “trajectories” remarkably similar to those presented in Philippidis, Dewdney and Hiley [5]. However, the Schrödinger equation cannot be applied to photons because photons have zero rest mass and are relativistic “particles” which must be treated differently. In fact details of how to treat photons and the electromagnetic field in the same spirit as the non-relativistic theory have already been given in Bohm [6], Bohm, Hiley and Kaloyerou [7], Holland [8] and Kaloyrou [9], but this work seems to have been ignored. Flack and Hiley [10] have re-examined the results of the experiment of Kocsis et al. [1] in the light of this electromagnetic field approach and have reached the conclusion that these experimentally constructed flow lines can be explained in terms of the momentum components of the energy-momentum tensor of the electromagnetic field. What is being measured is the weak value of the Poynting vector and not the classical Poynting vector suggested in Bliokh et al. [11].
This leaves open the question of the status of the Bohm trajectories calculated from the non-relativistic Schrödinger equation [4,5] for particles with finite rest mass. The validity of the notion of a quantum particle trajectory is certainly controversial. The established view has been unambiguously defined by Landau and Lifshitz [12]:—“In quantum mechanics there is no such concept as the path of a particle”. This position was not arrived at without an extensive discussion going back to the early debates of Bohr and Einstein [13], the pioneering work of Heisenberg [14] and many others [15]. We will not repeat these arguments here.
In contrast to the accepted position, Bohm showed how it was possible to define mathematically the notion of a local momentum, p ( r , t ) = S ( r , t ) , where S ( r , t ) is the phase of the wavefunction. From this definition it is possible to calculate flow-lines which have been interpreted as ‘particle trajectories’ [5]. To support this theory, Bohm [4] showed that under polar decomposition of the wave function, the real part of the Schrödinger equation appears as a deformed Hamilton-Jacobi equation, an equation that had originally been exploited by Madelung [16] and by de Broglie [17].
Initially this simplistic approach was strongly rejected as it seemed in direct contradiction to the arguments that had established the standard interpretation, even though the approach was based on the Schrödinger equation itself with no added new mathematical structures. However, recently this approach has received considerable mathematical support from the extensive work that has been ongoing in the literature exploring the deep relation between classical mechanics and the quantum formalism which has evolved from a field called “pseudo-differential calculus”. Specific relevance of this work to physics can be found in de Gosson [18] and the references found therein.
In this paper we want to examine one specific criticism that has been made against the notion of a “quantum trajectory”, namely the one emanating from the work of Englert et al. [19] (ESSW). They conclude, “the Bohm trajectory is here macroscopically at variance with the actual, that is: observed track. Tersely: Bohm trajectories are not realistic, they are surreal”. A similar strong criticism was voiced in Scully [20] who added that these trajectories were “at variance with common sense”. However the claim of an “observed track" in the above quotation should arouse suspicion coming from authors who claim to defend the standard interpretation as outlined in Landau and Lifshitz [12] .
The first part of the ESSW argument involved what they called the ‘standard analysis’ of a gedanken experiment consisting of several Stern-Gerlach magnets, an experiment that is discussed in Feynman [21]. It is this part of the argument that we examine in this paper. We show that they arrive at the wrong conclusion because they have not carried through the analysis correctly. Although Hiley [22] and Hiley and Callaghan [23] have presented a detailed criticism of this topic before in a different context, the point that we make in this paper is new. The standard use of quantum mechanics itself shows that what ESSW call the “macroscopically observed track” is identical to what has been called the “Bohm trajectory”. We support our arguments with detailed simulations of potential experiments that are being planned at present with our group at UCL.
2. Re-Examination of the Analysis of ESSW
2.1. General Results Using Wave Packets
The ESSW paper [19] contains an error in their analysis of the Stern-Gerlach experiment as shown in Figure 1 which is similar to the set-up shown in Figure 4 appearing in ESSW [19]. It depicts the tracks of spin one-half particles entering two Stern-Gerlach (SG) magnets. The particles enter along the y-axis with their spins initially pointing along this axis. The orientation of the magnetic field in each SG magnet is as shown in the figure, the second SG magnet being twice the length of the first.
On entering the first magnet, the wave packet begins to split into two wave packets which move apart in the magnetic field. The packet, ψ + , moves in the + z direction while the other, ψ , moves in the z direction. Thus the ψ + packet follows the upper track, while the ψ packet follows the lower track. Note here it is the wave packet we are discussing, not the particle.
To account for the z-motion of the packets, we use standard quantum mechanics as in ESSW [19], where the spin-dependent Hamiltonian is
H = 1 2 m P 2 + ε ( t ) σ z F ( t ) z σ z ,
where ε ( t ) σ z is the magnetic energy at z = 0 and F ( t ) z σ z is the energy due to the inhomogeneous field. The two components of the wave function are initially chosen to be
ψ + ( z , 0 ) = ψ ( z , 0 ) = ( 2 π ) 1 / 4 ( 2 δ z 0 ) 1 / 2 exp z 2 δ z 0 2 ,
where δ z 0 is the initial spread in z which is assumed small compared with the eventual maximum separation of the two beams.
At a later time, the equations of motion of the two wave packets are
ψ ± ( z , t ) = A ( t ) exp B ( t ) [ z Δ z ] 2 ± i [ z Δ p + 2 Φ ( t ) ] ,
where A ( t ) = ( 2 π ) 1 / 4 2 δ z 0 + i t 2 m δ z 0 1 / 2 and B ( t ) = 1 4 δ z 0 ( δ z 0 + i t 2 m δ z 0 ) . In arriving at this expression we have used the impulse approximation as presented in Bohm [24]. Here Δ p ( t ) = 0 t d t F ( t ) is the momentum transferred to the “up” wave packet. The actual magnitude is not relevant to our discussion; the interested reader is referred to the original ESSW paper for these details. The magnitude of Φ ( t ) = 2 / 0 t d t ε ( t ) is again not relevant to our argument.
Since no measurement has been made and the two beams are still coherent, the wave function after it has traversed the magnet is written in the form
| Ψ = | ψ + | + z + | ψ | z .
This gives the final probability density as
ρ ( z , t ) = | ψ + ( z , t ) | 2 + | ψ ( z , t ) | 2 ,
showing that there is no interference as the wave packets no longer overlap.
The z-component of the current is given by
j ( z , t ) = 2 i m Ψ z Ψ Ψ z Ψ = m ψ + ψ + + ψ ψ C ( t ) z + ψ + ψ + ψ ψ C ( t ) Δ z + Δ p
where C ( t ) = t / [ 2 m ( ( δ z 0 ) 4 + ( t / 2 m ) 2 ) ] . Note that the probability density is symmetric about the z = 0 plane, while ψ + ψ + ψ ψ is anti-symmetric, showing that the probability current is therefore antisymmetric, therefore,
ρ ( z , t ) = ρ ( z , t ) with j ( z , t ) = j ( z , t ) .
Also, as ψ + ψ + ψ ψ = 0 on the z = 0 plane, j ( z , t ) = 0 at z = 0 . Until this stage we agree totally with the calculations of ESSW using standard quantum mechanics based on conventional wave packet calculations, but it should be noted that this argument only holds when the incident spin is in the y-direction as in the ESSW thought experiment. Particle trajectories have not been discussed so far.
2.2. What Can Be Said about the Behaviour of Individual Particles?
Now we turn to consider what can be inferred about the behaviour of the individual particles, if anything. To answer this question let us return to Landau and Lifshitz [25] who argue that although we cannot talk about a precise particle trajectory, we can talk about the probability of finding a particle in a volume Δ V , provided the volume is large enough so that we avoid any problems associated with the uncertainty principle. Particles will flow into and out of the volume by crossing the boundary of the small volume. In this process we must ensure that probability is conserved.
To see how this works in detail, let us write the well-known conservation of probability equation in integral form. Thus
d d t | Ψ | 2 d V = . j d V = j d Σ
where at the last stage we have used Stokes’ theorem. Here j is the probability current density used to ensure probability conservation. The integral of this current over the surface Σ is the probability that a particle will cross the surface in unit time. By considering a series of connected volumes we can construct what can be regarded as a “macroscopic particle track”. Mott [26] has given a deeper analysis of this process.
Let us now apply this analysis to the situation shown in Figure 1. Construct a surface Σ comprising the z = 0 plane and a surface enclosing the upper half of the figure so as to include the upper parts of the magnet. Since the current density is zero everywhere on the z = 0 plane, no particles can cross this plane. Thus the particles that arrive in the upper-half of the experimental setup must remain in the upper-half and can never cross the z = 0 plane as long as the wave packets remain coherent. This clearly shows that the continuation of the trajectories sketched in Figure 4 of the ESSW paper (as in Figure 1 here) is not correct.
In Figure 5 of their paper, ESSW show more explicitly the spin directions together with a sketch of two Bohm trajectories. This shows that their spin wave packets cross the z = 0 axis whereas the Bohm trajectories do not. ESSW take this to mean that at first, part of the Bohm trajectories follow one of the wave packets and then, after their spin wave packets cross this axis, the trajectories follow the other wave packet. We will show in Section 4.3 the behaviour of their wave packets is not correct because they have not included spin correctly into the Bohm model.
3. The Bohm Approach When Spin Is Included
To give an account of the behaviour of a particle with spin in the non-relativistic limit, we must widen the scope of the Bohm approach. An extended model for a spin-half particle based on the Pauli equation has already been presented in Bohm, Schiller and Tiomno (BST) [27]. Full details of this model have also been discussed in a series of papers by Dewdney et al. [28,29,30,31] and by Holland [32]. This simple model has been applied to neutron diffraction and a single Stern-Gerlach magnet, the results being reported in [29,30]. It should be noted that none of this work is referred to in the ESSW paper and yet this is clearly significant as the Stern-Gerlach magnets operate on the magnetic moments of the particles.
If they had been aware of this work they would not have made the statement that in the Bohm theory a particle has a position and nothing else. In the BST extension, not only do we have position, but also the orientation of the spin vector. Here the Euler angles ( θ , ϕ , ψ ) are used to specify the spin direction. This is essentially the precursor of the flag picture of the spinor presented in Penrose and Rindler [33]. Bell [34] has a simpler model which was also based on the three components of the spin vector. A more general approach using Clifford algebras in which the Pauli spin matrices play a fundamental role has been presented in Hiley and Callaghan [35]. This approach shows how the BST model emerges as a particular representation using Euler angles.
3.1. Spin and the Use of the Pauli Equation
We start with the Pauli equation
i ξ t = H ξ ,
where ξ is the two-component spinor which we write in the form
ξ = R e i ( ψ / 2 ) cos ( θ / 2 ) e i ( ϕ / 2 ) i sin ( θ / 2 ) e i ( ϕ / 2 ) .
Here ( θ , ϕ , ψ ) are the three Euler angles.
The Hamiltonian H is then written in the form
H = 2 2 m i e 2 m A 2 + μ σ . B + V ,
where μ is the magnetic moment of the particle.
The original physical idea here was to assume the particle is a spinning object whose orientation is specified by the three Euler angles ( θ , ϕ , ψ ) . The probability of the particle being at a given position, ( r , t ) , is ρ ( r , t ) = R 2 ( r , t ) = | ξ ( r , t ) | 2 . This means the properties of the Pauli particle are specified by four real numbers ( ρ , θ , ϕ , ψ ) given at the point ( r , t ) . The time evolution of these parameters is determined by the Pauli Equation (5) as we will now show.
It is more convenient to rewrite the wave function in the form
ξ ( r , t ) = R + e i S + R e i S ,
θ = 2 tan 1 R R + ; ψ = S + + S π / 2 ; ϕ = S + S + π / 2 .
To find the velocity of the particle, let us first write the quantity ξ ξ in terms of the Euler angles,
ξ ξ = R R + i 2 R 2 ψ + i 2 cos θ R 2 ϕ .
Then following Hiley [36] we can define a complex local velocity
v = v R e + i v I m = i m ξ ξ ξ ξ
where the probability density is given by R 2 = ξ ξ .
The real part of the local velocity is
v R e = 2 m z ^ ( ψ + cos θ ϕ )
which replaces v ( r , t ) = S ( r , t ) / m defined for the spin-less particle. The imaginary part, which was not discussed by Bohm in his original paper (but see Bohm and Hiley [37]) is called the “osmotic velocity” and has the form
v I m = i m z ^ R R .
We will now use Equations (9) and (10) to simulate the detailed behaviour of the particles and their spin orientations as they traverse the set-up illustrated in Figure 1.
4. Detailed Calculation of the Trajectories
4.1. One Stern-Gerlach Magnet
We begin by simulating the behaviour of the particles having passed through a single Stern-Gerlach magnet. For simplicity we use the impulse approximation given in Bohm [24] to analyse the evolution of a wave packet as it leaves the magnet (A full treatment using Feynman propagators is being prepared by Hiley and Callaghan. This allows us to calculate trajectories inside the SG. Preliminary results confirm the results presented here.).
In the Hamiltonian given in Equation (7), we replace B by the field in the SG magnet, which we write as B μ ( B 0 + z B 0 ) , where B 0 is the field gradient inside the magnet and set A and V to zero.
Following Dewdney et al. [30] and Holland [32], we choose the initial wave function to be
ξ 0 = ξ + + ξ = f ( z ) ( c + u + + c u ) = ( 2 π ) 1 / 2 g ( k ) ( c + u + + c u ) e i k z d k ,
where g ( k ) = ( 2 σ 2 / π ) 1 / 4 e k 2 σ 2 is a normalised Gaussian packet centred at k = 0 in momentum space. Here u + and u are the eigenstates of the spin operator σ z . The solution of the Pauli equation at time t after the particle has left the SG magnetic field is
ξ = ( 2 π ) 1 / 2 d k g ( k ) { c + u + exp i Δ + ( k Δ ) z t 2 m ( k Δ ) 2 + c u exp i Δ + ( k + Δ ) z t 2 m ( k + Δ ) 2 }
where Δ = μ B 0 Δ t / , Δ = μ 0 B 0 Δ t / and Δ t is the time spent in the field. Carrying out the integral we find
ξ ( z , t ) = ( 2 π s t 2 ) 1 / 4 { c + u + exp [ ( z + u t ) 2 / 4 σ s t ] exp i ( Δ + ( z + 1 2 u t ) Δ ) + c u exp [ ( z u t ) 2 / 4 σ s t ] exp i ( Δ + ( z 1 2 u t ) Δ ) } .
Here s t = σ ( 1 + i t / 2 m σ 2 ) , and u = Δ / m . We now write ξ ( t ) in the form
ξ ( z , t ) = c + R + e i S + / u + + c R e i S / u
R ± = 2 π σ 2 1 / 4 ( 1 + 2 t 2 / 4 m 2 σ 4 ) 1 / 4 exp ( z ± u t ) 2 4 σ 2 ( 1 + 2 t 2 / 4 m 2 σ 4 )
S ± / = Δ ( z ± 1 2 u t ) Δ 1 2 tan 1 ( t / 2 m σ 2 ) + t ( z ± u t ) 2 8 m σ 4 ( 1 + 2 t 2 / 4 m 2 σ 4 ) .
We are now in a position to calculate the local velocities from the specific solution given by Equation (12). Since the real part of the local velocity is given by Equation (9), namely, ( ψ + cos θ ϕ ) / 2 m , we need only evaluate ψ / z and ϕ / z since we are only considering the motion along the z-direction. In order to find these derivatives, and those required for the osmotic velocity and the quantum potential, we express the parameters ( ρ , θ , ϕ , ψ ) in terms of ( R + , R , S + , S ) using Equations (8), (13) and (14), and obtain,
ψ z = 4 t z 8 m σ 4 ( 2 t 2 / 4 m 2 σ 4 + 1 ) ,
ϕ z = 2 Δ + 4 u t 2 8 m σ 4 ( 2 t 2 / 4 m 2 σ 4 + 1 ) ,
θ z = sin θ u t σ 2 ( 2 t 2 / 4 m 2 σ 4 + 1 ) ,
1 R R z = z + u t cos θ 2 σ 2 ( 2 t 2 / 4 m 2 σ 4 + 1 ) .
The Bohm velocity given by Equation (9) then becomes
v R e = z ^ 2 m 2 Δ cos θ + t [ z + u t cos θ ] 2 m σ 4 ( 2 t 2 / 4 m 2 σ 4 + 1 ) .
Note here that the second term in the above expression corresponds to the spreading of the wave packet and contributes little to the overall behaviour. The main effect of the field comes from the first term Δ cos θ , which reveals clearly how the velocities and therefore the trajectories are strongly affected by the behaviour of the spin vector. This term depends implicitly on ( z , t , u ) and is responsible for the splitting of the beam.
The imaginary part or osmotic local velocity given in Equation (10), namely, v I m = i [ R / R ] / m , now becomes
v I m = 2 m z ^ [ z + u t cos θ ] σ 2 ( 2 t 2 / 4 m 2 σ 4 + 1 ) .
Note there is no explicit dependence on the magnetic field gradient but there is an implicit dependence through u and cos θ .
These results enable us to calculate specific trajectories and spin vectors for various particle initial positions and for various values of ( c + , c ) should that become necessary. The choice of the latter determine the initial value of the spin vector direction θ which, in our case was chosen to be along the y-direction, hence ( c + , c ) = 1 / 2 . The results shown in figures below are calculated for parameters listed in Table 1.
4.2. Numerical Values for Single Stern-Gerlach Magnet
Integrating Equation (9) will give us the Bohm trajectories. In Figure 2 we show the ensemble of Bohm trajectories and the spin orientations as they leave the Stern-Gerlach magnet, shown in brown at the LHS of the figure. The background colours show the probability density, black being the greatest, while blue is zero.
The dark background shows how the wave packets diverge along straight lines, as do the trajectories. Superimposed on the trajectories are the spin orientations.
Notice that, contrary to the conventional view, the atoms do not immediately “jump” into one or other z-spin eigenstates, rather the spin vectors undergo continuous evolution until they reach their final z-spin eigenstate. This occurs once the two wave packets ψ + ( z , t ) and ψ ( z , t ) have separated and have no significant overlap. The upper beam will contain only atoms with spin “up" in the z-direction while those in the lower beam will all be “down” in the z-direction. Notice also that the rotational changes occur in a magnetic field-free region. We can also see that the alignment of the spin vector at a given y value close to the magnet depends on z, with the spin associated to trajectories closer to the z = 0 axis rotated least. In Section 4.7 we will see that the cause of these behaviours is a torque produced by the quantum potential. These results for a single magnet confirm what was already found in Dewdney et al. [29,30,31].
Figure 3 shows the effect of the osmotic velocity, which we have represented by arrows. They are responsible for maintaining the wave packet profile and will be discussed further in Section 4.6.
4.3. Two Stern-Gerlach Magnets
Having seen how the atoms behave in a single SG magnet, let us now move on to consider two SG magnets with opposite field directions as shown in Figure 1. Note here the second SG magnet is double the length of the first.
The method is similar to the case of the single magnet, except now we use, as initial wave packet, the inverse Fourier transform of the wavefunction at the second magnet at time t = t 1 . We obtain the real part of the local velocity as
v 2 Re = z ^ 2 m 2 Δ 2 2 Δ 1 2 ( t 1 + t ) 2 4 m 2 σ 4 + 1 cos θ + ( t 1 + t ) 2 m σ 4 2 ( t 1 + t ) 2 4 m 2 σ 4 + 1 z + u 2 t cos θ
and the osmotic velocity as
v 2 Im = z ^ m 1 2 σ 2 2 ( t 1 + t ) 2 4 m 2 σ 4 + 1 z + u 1 ( t 1 + t ) + u 2 t cos θ
where t = 0 at the exit of the second magnet. In Figure 4 we have plotted the trajectories together with the spin orientations as the atoms pass through two SG magnets. The details of the parameters used in the calculations are again as listed in Table 1. The position of the second magnet is as indicated by the brown bar between y = 0.1 m and y = 0.12 m.
There are several features of the ensemble of trajectories that are noteworthy. Firstly, at the exit of the second magnet, the wave-packets are refocused toward the y-axis until the inner edge of the packets reaches the axis at y 0.22 m at which point they diverge again.
Secondly, no trajectories are found to cross the z = 0 plane. This should, in fact, not be surprising since v R e can also be obtained from j ( z , t ) / ρ ( z , t ) . This means that the “Bohm trajectories” are identical to the probability flow lines and, as we have seen, the probability flow lines do not cross the z = 0 plane. Thus there is no experimental difference between the Bohm approach and standard quantum mechanics at this stage. It could be argued that it is quantum mechanics that is “at variance with common sense”!
Thirdly, notice once again that the spins do not immediately “jump” into the eigenstates as assumed by the standard theory. Rather they take a small but finite time to reach the final eigenstate as discussed above in Section 3.1. Furthermore note that when the beams are refocused close to the z = 0 plane, at about y = 0.22 , the spin vectors are rotated so that they all become aligned with the y-axis before being rotated again until they end up anti-parallel to the direction with which they entered the second magnet. This rotation is very surprising but is generated by the quantum torque that arises from the quantum potential as we show in the next section in Equation (21).
Furthermore this is in contradiction with Figure 5 of ESSW where they argue that the Bohm trajectories are not realistic because in order to get the observed final spin state, their particles must cross the z = 0 axis. Therefore the present work shows clearly the importance of coupling the spin and the centre of mass motion in order to obtain a correct and consistent analysis of the problem.
Figure 5 shows the direction of the osmotic velocity in the two SG magnets case. Its behaviour is again exactly the same as in the one SG magnet case.
To return the packet to its original state with all the spins pointing in the y-direction, we have to add a third magnet as indicated in the original diagram in Feynman et al. [38]. Thus the Bohm approach gives a complete account of the average behaviour of the individual quantum processes.
4.4. The Appearance of the Quantum Torque
Now let us show the source of the quantum torque. We start by examining the real part of the Pauli Equation (5) under polar decomposition of the wave function, which can be written in the form
1 2 ψ t + cos θ ϕ t + 1 2 m v 2 + Q P + 2 μ σ . s + V = 0 .
Here once again we see, as in the case of the Schrödinger equation, an extra energy term, Q P , the quantum potential energy, appears. In the present case Q P takes the form
Q P = ( 2 2 R ) / 2 m R 2 8 m [ ( θ ) 2 + sin 2 θ ( ϕ ) 2 ] .
The first term will be recognised as the quantum potential found in the Schrödinger equation. The second term determines the evolution of the spin vector which is given by
s = 1 2 ξ σ ξ = 1 2 ( sin θ sin ϕ , sin θ cos ϕ , cos θ ) .
The equation of motion for the spin vector s , is then found to be
d s d t = T 2 μ ( s × B ) .
Here B is an external magnetic field and
T = ( m ρ ) 1 s × i x i ρ s x i .
It is the quantum torque, T , that acts on the individual atoms, rotating their spin vectors and the flag plane.
4.5. Detailed Calculation of the Quantum Potential
To understand better the role played by the quantum potential, let us examine in more detail its mathematical structure as shown in Equation (20). We restrict our analysis to the case of a single magnet. As the quantum Hamilton-Jacobi Equation (19) is an equation that conserves energy, the appearance of Q implies that some of the kinetic energy of the particle is transferred to the quantum potential energy Q. As we see from Equation (20), the quantum potential energy has two components
Q t r a n s = 2 2 R 2 m R and Q s p i n = 2 8 m [ ( θ ) 2 + sin 2 θ ( ϕ ) 2 ] .
We will examine the two terms independently. First consider Q t r a n s . Since the particle is moving in one-dimension
2 R 2 R z 2 = 2 b d 2 R ( z + u t cos θ ) 2 b d + R 1 4 u 2 t 2 b d sin 2 θ ,
where we have written
b = 2 t 2 4 m 2 σ 4 + 1 and d = 4 σ 2 .
Q t r a n s = 2 2 R 2 m R = 2 b d m 2 b d [ ( z + u t cos θ ) 2 + 2 u 2 t 2 sin 2 θ ] + 1 .
Now we turn to evaluate the spin part of the quantum potential, Q s p i n , where we need to evaluate
ϕ = 2 Δ + 4 u t 2 8 m b σ 4 and θ = sin θ u t b σ 2 .
This gives
Q s p i n = 2 sin 2 θ 8 b m u 2 t 2 σ 4 2 Δ u t 2 2 m σ + 4 Δ 2 2 t 2 4 m 2 σ 4 + 1 .
The expression for the total quantum potential, Q = Q t r a n s + Q s p i n is rather complex so it will be helpful if we can make an approximation without significantly altering the final result. This can be done by noticing the magnitude of b = 2 t 2 4 m 2 σ 4 + 1 1 . This means that we are assuming the wave packet does not spread significantly during the flight times considered. We arrive at the final expression for the total quantum potential:
Q 2 m σ 2 m σ 4 [ ( z + u t cos θ ) 2 + 2 u 2 t 2 sin 2 θ ] + 1 + 2 sin 2 θ 8 m u 2 t 2 σ 4 2 Δ u t 2 m σ 4 + 4 Δ 2 .
4.6. Numerical Details: Quantum Potential Single Stern-Gerlach Magnet
In Figure 6 below we plot the transverse quantum potential Q t r a n s and the spin quantum potential, Q s p i n for the single SG magnet. The end of the SG magnet is again along the z-axis at y = 0, with the atoms flowing along the y-axis out of the page.
The atoms initially experience the first part of the quantum potential where the beam begins to split into two as shown in Figure 2. Both quantum potentials split symmetrically into two parts about the y-axis. The two “domes” of Q t r a n s , shown in the left hand of the figure, cover each beam as they separate. The width of each dome characterises the spreading wave packet as it evolves in time. Also, when compared to the osmotic velocities shown in Figure 3, we can see how these velocities are related to the gradient of Q t r a n s . The trajectories are seen to follow paths of constant gradient and the osmotic velocities are constant along the trajectories in Figure 3. Furthermore, those trajectories in the wings of the wave packets experience a more steep gradient and the osmotic velocities are indeed found to be larger there. At the maximum of the packet, the osmotic velocity is zero. An interpretation of the Q t r a n s would therefore be that it gives rise to a force, which is anti-parallel to the osmotic velocity and restricts the spreading of the wave packet.
The spin part of the quantum potential Q s p i n is shown in the right hand of Figure 6. The upward slope produces the quantum torque that rotates the spin vectors of the atoms as the two beams separate. This rotation continues until the two packets are completely separate. When this happens all the spins point “up” in the upper beam, while they all point “down” in the lower beam. At this stage the Q s p i n 0 ensuring the atoms remain in their final spin eigenstates. Figure 7 shows the projection of the Q s p i n of Figure 6 on the trajectories and spin orientation. Note also that the trajectories close to the y-axis do not experience the same steepness of Q s p i n as do those which are off-axis. This explains why, as remarked earlier, the spin vectors closer to the y-axis take longer to align themselves either up or down.
4.7. Numerical Details: Quantum Potential in Two SG Magnet Case
Now let us consider the case when the two Stern-Gerlach magnets are in place. The positions of the magnets are shown in brown. Recall here that the inhomogeneities in the magnetic fields oppose each other.
In Figure 8 we show both Q t r a n s and Q s p i n for the case of two magnets. The gap in each figure corresponds to the position of the second magnet. The quantum potential after the second magnet is similar to that of the single SG magnet as shown in Figure 6. These results give a detailed picture of the expected evolution of a non-relativistic atom with spin one-half as it goes through both SG magnets.
Figure 9 shows the projection of the spin quantum potential superimposed on the trajectories. Notice that the quantum torque is strongest well outside the second SG magnet in the magnetic field-free region, producing a 180 degree rotation of the spin vector. It is at this point that the wave packets begin to interfere strongly. In fact the quantum torque continues to act outside the magnet until the two wave packets ψ + ( z , t ) and ψ ( z , t ) cease to overlap. Notice once again how the spin does not immediately ‘jump’ into one of the two spin z-eigenstates, but undergoes a well-defined time evolution. Such a behaviour would have, perhaps, been welcomed by Schrödinger himself [39].
Once they no longer overlap, each atom remains in one or the other spin eigenstates. Again, as was the case with the single SG magnet, the spin vectors along the trajectories close to the y-axis, especially at the point where the two beams are refocused, experience less of the gradient of Q s p i n . Thus it is clear that the quantum torque arises from the interference region, implying it is an internal feature of the overall behaviour, suggesting a kind of dramatic re-structuring of the underlying process.
Bohm was intuitively well aware of this possibility and it was one of the reasons why he abandoned the view that the atom only had a local, “rock-like” property. He preferred to regard the atom as a quasi-local region of energy undergoing a new type of process that he described in more general terms as an “unfolding-enfolding” process, comparing it to a gas near its critical point, the particle itself constantly forming and dissolving, as in critical opalescence [40,41]. In other words, the quantum evolution involves an entirely new re-ordering process which should not be regarded as a particle following a well defined trajectory.
This view of the evolving quantum process becomes even more compelling since Hiley and Callaghan [42] and Takabayasi [43] have shown that the local momentum and energy are actually related to the energy-momentum tensor, T μ ν , through the relations
ρ p j ( r , t ) = T 0 j ( r , t ) and ρ E ( r , t ) = T 00 ( r , t ) ,
a feature of which Schwinger [44] was well aware. The question of which particular trajectory a specific atom actually takes cannot be answered because the experimenter has no way of choosing or controlling the initial position of the particle. The final result is also totally independent of the observer. A detailed discussion of the role of the experimenter in the Bohm approach can be found in Bohm and Hiley [45,46]. A more recent paper by Flack and Hiley [47] shows how the Bohm trajectories emerge from an averaging over this deeper process.
We can see from Figure 9, the above simulations predict some interesting structure in near field behaviour of the atoms after they leave the second SG magnet. This could be experimentally explored through weak measurements as suggested in [48]. At present, our group [49] is attempting to measure the weak values of momentum and spin which, if successful, would ultimately enable us to not only construct these flow lines, but also to measure the time evolution of the angle θ ( y , t ) of the spin vector.
We are also exploring the possibility of using the techniques we are developing to check the results shown in Figure 2. At present we are on the edge of what is technically possible and if we are successful, the experiments will show that the quantum potential energy appearing in Equation (19) has an observable experimental consequence and therefore cannot be ignored in analysing quantum phenomena.
5. Conclusions
In this paper we have shown that the differences that are claimed to exist between the standard approach to quantum mechanics and the Bohm approach do not exist when both are applied correctly. Indeed it is hard to imagine how there could be any differences in the predicted experimental results since both approaches use exactly the same mathematical structure. For the type of experiments considered by ESSW [19], the probability current plays a key role. In both approaches the probability current is considered as a particle flow, the conventional approach regarding it as a measure of particles flowing out of a small region, Δ V , of space, whereas the Bohm approach assumes the probability current arises from the velocities of individual particles through the relation j ( r ) / ρ ( r ) = S ( r ) / m = p ( r ) / m . In the Bohm model this is taken as the definition of the local momentum, p ( r ) . Clearly the behaviour of the probability currents is identical to the local momentum. This is what ESSW failed to recognise. Notice that this disagreement arises before the addition of any device to measure which path the particle actually took.
The inclusion of a which-way detector into the discussion merely confuses the issue. Traditionally it is assumed that any measurement to determine which path a particle actually takes brings about the “collapse” of the wave function. Suppose a position measurement is made after the atom has left the second SG magnet as shown in Figure 1. The wave function (1) will not then be the pure state but instead will be a mixture which must be described by a density matrix ρ with ρ 2 ρ . This means there is no interference between the two wave packets ψ + and ψ in which case the particles actually cross the z = 0 plane as shown in Figure 1. Exactly the same thing happens in the Bohm model as was discussed in detail in Hiley [22] and Hiley and Callaghan [23]. We will not repeat the argument again in this paper but refer the interested reader to the original papers. Our conclusion is that the standard quantum mechanics produces exactly the same behaviour as the Bohmian approach so it cannot be used to conclude the Bohm trajectories are “surreal”.
Since these earlier objections were raised, an entirely new way of experimentally constructing the “Bohm particle trajectories” has been developed by Kocsis et al. [1] as discussed in the introduction. Furthermore in the case of atoms the claim that these are “particle trajectories” has been re-examined recently by Flack and Hiley [47] who have concluded that the flow lines, as we shall now call them, are not the trajectories of single atoms but an average momentum flow, the measurements being taken over many individual particle events. In fact they have shown that they represent an average of the ensemble of actual individual stochastic Feynman paths.
The calculations we have presented in this paper provide a detailed background to the experiments of Monachello et al. [49] and Morley et al. [50]. This means that we will not have to rely on theoretical arguments alone to reach an understanding of the behaviour reported in this paper but we hope to be able to provide experimental evidence to further clarify the situation.
Author Contributions
Both authors contributed in the same manner to the research and wrote the paper together.
Special thanks to Bob Callaghan, Robert Flack and Vincenzo Monachello for their helpful discussions. Thanks also to the Franklin Fetzer Foundation for their financial support.
Conflicts of Interest
The authors declare no conflict of interest.
1. Kocsis, S.; Braverman, B.; Ravets, S.; Stevens, M.J.; Mirin, R.P.; Shalm, L.K.; Steinberg, A.M. Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer. Science 2011, 332, 1170–1173. [Google Scholar] [CrossRef] [PubMed]
2. Mahler, D.; Rozema, L.; Fisher, K.; Vermeyden, L.; Resch, K.; Braverman, B.; Wiseman, H.; Steinberg, A.M. Measuring Bohm trajectories of entangled photons. In Proceedings of the CLEO: QELS-Fundamental Science, Optical Society of America, San Jose, CA, USA, 8–13 June 2014; p. FW1A-1. [Google Scholar]
3. Coffey, T.M.; Wyatt, R.E. Comment on “Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer”. arXiv, 2011; arXiv:1109.4436. [Google Scholar]
4. Bohm, D. A Suggested Interpretation of the Quantum Theory in Terms of Hidden Variables I. Phys. Rev. 1952, 85, 166–179. [Google Scholar] [CrossRef]
5. Philippidis, C.; Dewdney, C.; Hiley, B.J. Quantum Interference and the Quantum Potential. Nuovo Cimento 1979, 52, 15–28. [Google Scholar] [CrossRef]
6. Bohm, D. A Suggested Interpretation of the Quantum Theory in Terms of Hidden Variables II. Phys. Rev. 1952, 82, 180–193. [Google Scholar] [CrossRef]
7. Bohm, D.; Hiley, B.J.; Kaloyerou, P.N. An Ontological Basis for the Quantum Theory: II—A Causal Interpretation of Quantum Fields. Phys. Rep. 1987, 144, 349–375. [Google Scholar] [CrossRef]
8. Holland, P.R. The de Broglie-Bohm Theory of motion and Quantum Field Theory. Phys. Rep. 1993, 224, 95–150. [Google Scholar] [CrossRef]
9. Kaloyerou, P.N. The Causal Interpretation of the Electromagnetic Field. Phys. Rep. 1994, 244, 287–385. [Google Scholar] [CrossRef]
10. Flack, R.; Hiley, B.J. Weak Values of Momentum of the Electromagnetic Field: Average Momentum Flow Lines, Not Photon Trajectories. arXiv, 2016; arXiv:1611.06510. [Google Scholar]
11. Bliokh, K.Y.; Bekshaev, A.Y.; Kofman, A.G.; Nori, F. Photon trajectories, anomalous velocities and weak measurements: A classical interpretation. New J. Phys. 2013, 15, 073022. [Google Scholar] [CrossRef]
12. Landau, L.D.; Lifshitz, E.M. Quantum Mechanics: Non-Relativistic Theory; Pergamon Press: Oxford, UK, 1977; p. 2. [Google Scholar]
13. Einstein, A. Albert Einstein: Philosopher-Scientist; Schilpp, P.A., Ed.; Library of the Living Philosophers: Evanston, IL, USA, 1949; pp. 665–676. [Google Scholar]
14. Heisenberg, W. Physics and Philosophy: The Revolution in Modern Science; George Allen and Unwin: London, UK, 1958. [Google Scholar]
15. Jammer, M. The Philosophy of Quantum Mechanics; Wiley: New York, NY, USA, 1974. [Google Scholar]
16. Madelung, E. Quantentheorie in hydrodynamischer Form. Z. Phys. 1926, 40, 322–326. [Google Scholar] [CrossRef]
17. De Broglie, L. La mécanique ondulatoire et la structure atomique de la matière et du rayonnement. J. Phys. Radium 1927, 8, 225–241. [Google Scholar] [CrossRef]
18. De Gosson, M. The Principles of Newtonian and Quantum Mechanics: The Need for Planck’s Constant; Imperial College Press: London, UK, 2001. [Google Scholar]
19. Englert, J.; Scully, M.O.; Süssman, G.; Walther, H. Surrealistic Bohm Trajectories. Z. Naturforsch. 1992, 47, 1175–1186. [Google Scholar] [CrossRef]
20. Scully, M. Do Bohm trajectories always provide a trustworthy physical picture of particle motion? Phys. Scr. 1998, 76, 41–46. [Google Scholar] [CrossRef]
21. Feynman, R.P.; Leighton, R.B.; Sands, M. The Feynman Lectures on Physics III; Addison-Wesley: Reading, MA, USA, 1965; Chapter 5. [Google Scholar]
22. Hiley, B.J. Welcher Weg Experiments from the Bohm Perspective, Quantum Theory: Reconsiderations of Foundations-3, Växjö, Sweden 2005; Adenier, G., Krennikov, A.Y., Nieuwenhuizen, T.M., Eds.; AIP: College Park, MD, USA, 2006; pp. 154–160. [Google Scholar]
23. Hiley, B.J.; Callaghan, R.E. Delayed Choice Experiments and the Bohm Approach. Phys. Scr. 2006, 74, 336–348. [Google Scholar] [CrossRef]
24. Bohm, D. Quantum Theory; Prentice-Hall: Englewood Cliffs, NJ, USA, 1951. [Google Scholar]
25. Landau, L.D.; Lifshitz, E.M. Quantum Mechanics: Non-Relativistic Theory; Pergamon Press: Oxford, UK, 1977; pp. 56–57. [Google Scholar]
26. Mott, N.F. The Wave Mechanics of α-Ray Tracks. Proc. R. Soc. 1929, 126, 79–84. [Google Scholar] [CrossRef]
27. Bohm, D.; Schiller, R.; Tiomno, J. A causal interpretation of the Pauli equation (A). Nuovo Cimento 1955, 1, 48–66. [Google Scholar] [CrossRef]
28. Dewdney, C. Particle Trajectories and Interference in a Time-dependent Model of Neutron Single Crystal Interferometry. Phys. Lett. 1985, 109, 377–384. [Google Scholar] [CrossRef]
29. Dewdney, C.; Holland, P.R.; Kyprianidis, A.; Vigier, J.-P. Spin and non-locality in quantum mechanics. Nature 1988, 336, 536–544. [Google Scholar] [CrossRef]
30. Dewdney, C.; Holland, P.R.; Kyprianidis, A. What happens in a spin measurement? Phys. Lett. A 1986, 119, 259–267. [Google Scholar] [CrossRef]
31. Dewdney, C.; Holland, P.R.; Kyprianidis, A. A Causal Account of Non-local Einstein-Podolsky-Rosen Spin Correlations. J. Phys. A Math. Gen. 1987, 20, 4717–4732. [Google Scholar] [CrossRef]
32. Holland, P.R. The Quantum Theory of Motion: An Account of the de Broglie-Bohm Causal Interpretation of Quantum Mechanics; Cambridge University Press: Cambridge, UK, 1995. [Google Scholar]
33. Penrose, R.; Rindler, W. Spinors and Space-Time; Cambridge University Press: Cambridge, UK, 1984; Volume 1. [Google Scholar]
34. Bell, J.S. Speakable and Unspeakable in Quantum Mechanics; Cambridge University Press: Cambridge, UK, 1987. [Google Scholar]
35. Hiley, B.J.; Callaghan, R.E. The Clifford Algebra approach to Quantum Mechanics A: The Schrödinger and Pauli Particles. arXiv, 2010; arXiv:1011.4031. [Google Scholar]
36. Hiley, B.J. Weak Values: Approach through the Clifford and Moyal Algebras. J. Phys. Conf. Ser. 2012, 361, 012014. [Google Scholar] [CrossRef]
37. Bohm, D.; Hiley, B.J. Non-locality and Locality in the Stochastic Interpretation of Quantum Mechanics. Phys. Rep. 1989, 172, 93–122. [Google Scholar] [CrossRef]
38. Feynman, R.P.; Leighton, R.B.; Sands, M. The Feynman Lectures on Physics III; Addison-Wesley: Reading, MA, USA, 1965; Chapter 5.2. [Google Scholar]
39. Schrödinger, E. Are There Quantum Jumps? Part I. Br. J. Philos. Sci. 1952, 3, 109–123. [Google Scholar] [CrossRef]
40. Bohm, D. The Implicate Order: A New Approach to the Nature of Reality; A Talk Given at Syracuse University; Syracuse University: Syracuse, NY, USA, 1982. [Google Scholar]
41. Bohm, D. A proposed Explanation of Quantum Theory in Terms of Hidden Variables at a Sub-Quantum Mechanical Level. In Observation and Interpretation, Proceedings of the Ninth Symposium of the Colston Research Society, Bristol, UK, 1–4 April 1957; Korner, S., Ed.; Butterworth Scientific Publications: London, UK, 1957; pp. 33–40. [Google Scholar]
42. Hiley, B.J.; Callaghan, R.E. The Clifford Algebra Approach to Quantum Mechanics B: The Dirac Particle and its relation to the Bohm Approach. arXiv, 2010; arXiv:1011.4033. [Google Scholar]
43. Takabayasi, T. Remarks on the Formulation of Quantum Mechanics with Classical Pictures and on Relations between Linear Scalar Fields and Hydrodynamical Fields. Prog. Theor. Phys. 1953, 9, 187–222. [Google Scholar] [CrossRef]
44. Schwinger, J. The Theory of Quantised Fields I. Phys. Rev. 1951, 82, 914–927. [Google Scholar] [CrossRef]
45. Bohm, D.J.; Hiley, B.J. Measurement Understood Through the Quantum Potential Approach. Found. Phys. 1984, 14, 255–264. [Google Scholar] [CrossRef]
46. Bohm, D.; Hiley, B.J. The Undivided Universe: An Ontological Interpretation of Quantum Theory; Routledge: London, UK, 1993. [Google Scholar]
47. Flack, R.; Hiley, B.J. Feynman Paths and Weak Values. Preprints 2018, 2018040241. [Google Scholar] [CrossRef]
48. Flack, R.; Hiley, B.J. Weak Measurement and its Experimental Realisation. J. Phys. Conf. Ser. 2014, 504, 012016. [Google Scholar] [CrossRef]
49. Monachello, V.; Flack, R.; Hiley, B.J.; Callaghan, R.E. A method for measuring the real part of the weak value of spin using non-zero mass particles. arXiv, 2017; arXiv:1701.04808. [Google Scholar]
50. Morley, J.; Edmunds, P.D.; Barker, P.F. Measuring the weak value of the momentum in a double slit interferometer. J. Phys. Conf. Ser. 2016, 701, 012030. [Google Scholar] [CrossRef]
Figure 1. Sketch of Particle Tracks Presented in ESSW.
Figure 1. Sketch of Particle Tracks Presented in ESSW.
Entropy 20 00353 g001
Figure 2. Trajectories with spin vectors immediately on exiting the Stern-Gerlach (SG) magnet.
Entropy 20 00353 g002
Figure 3. The osmotic flow vectors immediately on exiting the SG magnet.
Entropy 20 00353 g003
Figure 4. Spins emerging from two Stern-Gerlach magnets.
Figure 4. Spins emerging from two Stern-Gerlach magnets.
Entropy 20 00353 g004
Figure 5. The osmotic velocity superimposed on the trajectories for two Stern-Gerlach magnets.
Entropy 20 00353 g005
Figure 6. Transverse (left) and spin (right) quantum potential at exit of a single SG magnet.
Entropy 20 00353 g006
Figure 7. Trajectories with spin vectors overlaid on the spin quantum potential immediately on exiting a single SG magnet.
Entropy 20 00353 g007
Figure 8. Q t r a n s (left) and Q s p i n (right) quantum potential for a two SG magnets system.
Entropy 20 00353 g008
Figure 9. Trajectories with spin vectors overlaid on the spin quantum potential for a two SG magnet system.
Entropy 20 00353 g009
Table 1. Parameters used in the numerical investigation.
Table 1. Parameters used in the numerical investigation.
Mass 1.8 × 10 25 Kg
Width of magnets4 and 8 × 10 4 m
Length of magnets1 and 2 × 10 2 m
Velocity of atoms v y = y / t = 500 m/s
Time within magnets Δ t = 2 and 4 × 10 5 s
Magnetic field strength at centre B 0 = 5 Tesla
Magnetic field gradient B 0 = 1000 Tesla/m
Wave packet width σ = 1 × 10 4 m
Wave packet speed u = μ B B 0 Δ t / m = 1 m/s
Δ = μ B B 0 Δ t = m u / Δ = 1.717 × 10 9 m 1
Back to TopTop |
936b86399ca61e80 | click links in text for more info
Energy level
A quantum mechanical system or particle, bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles; the term is used for the energy levels of electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized. In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus; the closest shell to the nucleus is called the "1 shell", followed by the "2 shell" the "3 shell", so on farther and farther from the nucleus. The shells correspond with the principal quantum numbers or are labeled alphabetically with letters used in the X-ray notation; each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight electrons, the third shell can hold up to 18 and so on.
The general formula is. Since electrons are electrically attracted to the nucleus, an atom's electrons will occupy outer shells only if the more inner shells have been filled by other electrons. However, this is not a strict requirement: atoms may have two or three incomplete outer shells. For an explanation of why electrons exist in these shells see electron configuration. If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention bound electron states have negative potential energy. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. If more than one quantum mechanical state is at the same energy, the energy levels are "degenerate", they are called degenerate energy levels. Quantized energy levels result from the relation between its wavelength.
For a confined particle such as an electron in an atom, the wave function has the form of standing waves. Only stationary states with energies corresponding to integral numbers of wavelengths can exist. Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator; the first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom; the modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926. In the formulas for energy of electrons at various levels given below in an atom, the zero point for energy is set when the electron in question has left the atom, i.e. when the electron's principal quantum number n = ∞.
When the electron is bound to the atom in any closer value of n, the electron's energy is lower and is considered negative. Assume there is one electron in a given atomic orbital in a hydrogen-like atom; the energy of its state is determined by the electrostatic interaction of the electron with the nucleus. The energy levels of an electron around a nucleus are given by: E n = − h c R ∞ Z 2 n 2, where R∞ is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is Planck's constant, c is the speed of light. For hydrogen-like atoms only, the Rydberg levels depend only on the principal quantum number n; this equation is obtained from combining the Rydberg formula for any hydrogen-like element with E = h ν = h c / λ assuming that the principal quantum number n above = n1 in the Rydberg formula and n2 = ∞. The Rydberg formula was derived from empirical spectroscopic emission data. 1 λ = R Z 2 An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants.
If there is more than one electron around the atom, electron-electron-interactions raise the energy level. These interactions are neglected if the spatial overlap of the electron wavefunctions is low. For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated sim
Almere Poort
Almere Gate (. It is the newest part of what is a new city itself, with the first building completed only in 2005. Although Almere is a planned city, Almere Poort was not in the original city plans, but is rather a result of revised urban planning in accordance to Almere's more recent development plans assuming much higher target population and more prominent role as a satellite urban centre to Amsterdam. Almere Poort is located on the western bank of the IJmeer, with the Almeerderstrand beach forming the borough's western boundary, it borders Almere Haven to the south, Almere Stad to the east and the yet-undeveloped designated district of Almere Pampus to the North. The A6 motorway runs along the borough's southern border, with one exit at Poortdreef and one at Hogering, the latter of which runs along the eastern border; the Flevolijn railway line runs semi-diagonally across the borough, with local trains stopping at the borough's station Almere Poort railway station. The former Almere Strand railway station, serviced only during events at Almeerderstrand, was within the boundaries of Almere Poort.
It was demolished in 2012. Since buses run between Almere Poort station and Almeerderstrand during events. Almere Poort differs from the other boroughs and districts in that residential development does not follow the model typical for other districts of Almere, large-scale single-family housing developments built according to a common plan. Contrary to this, the district features both high-density apartment buildings, as well as land parcels available for free private housing development. Contrary to older districts of Almere, Almere-Poort refers to its residential areas as kwartieren rather than wijken; the designated parts of Almere Poort according to the official website of the municipality are: Homeruskwartier - featuring land plots for private housing development, with streets named after mythological beings and characters featured in the works of Homer Europakwartier - consisting of large-scale apartment buildings with an urban atmosphere, divided by the railway tracks into Europakwartier-West and Europakwartier-Oost.
The street naming after European countries follows the east-west geographic location of countries Columbuskwartier - this residential district has streets named after famous explorers such as Christopher Columbus himself Olympiakwartier and Olympia Office Park are to be constructed around the Almere Poort railway station to form the district's multi-functional and prestigious centre. Duin which will feature beach condo's and sand hills close to the beach of Almeerderstrand and Marina Muiderzand. Cascadepark - a park between Homeruskwartier and Europakwartier with its first trees planted in 2008 Kustzone - the coastal zone along the Almeerderstrand is designated for mixed development, including residential buildings, leisure facilities and some commerce Hogekant - this commercial area in the Western extremity of the district is designated for industrial use by small and mid-sized enterprises in the craft and service sectors Middenkant - separated with a canal from the Hogekant is to provide a more high-quality neighbourhood for SMEs Lagekant - this commercial area is designated for large and international companies in the service sector Almere Poort's section of the official webpage of the municipality of Almere
Morten Stræde
Morten Stræde is a Danish sculptor. He attended the Royal Danish Academy of Fine Arts from 1978–1985 and became a professor there. Morten Stræde had his breakthrough in the 1980s. In 2011, he created three new urban spaces in the Nørrebro borough in Copenhagen. Stræde was awarded the Premio Internazionale di scultura in 1999 and received the Eckersberg Medal in 2000, he has received the Thorvaldsen Medal. In its appraisal of Stræde's work, the academy noted his ability to assess the history of the place and the physical location where his public art is to be displayed. An example mentioned by the jury is Ulisse. Stræde is represented at the following art museums: National Gallery of Denmark Kobberstiksamlingen ARoS Aarhus Kunstmuseum Herning Museum of Art Horsens Museum of Art Kunstmuseet Køge Skitsesamling Vestsjællands Museum of Art Kunstmuseet Trapholt Esbjerg Art Museum Vejle Museum of Art Gothenburg Museum of Art, official site
National Geographic Endeavour
MS National Geographic Endeavour was a small expedition ship operated by Lindblad Expeditions for cruising in remote areas the polar regions. The ship was a fishing trawler built in 1966 as Marburg, converted to carry passengers in 1983. First named North Star Caledonian Star. On March 2, 2001, the ship was struck by a 30-metre-high rogue wave while crossing the Drake Passage, she was assisted by the Argentine Navy ocean fleet tug ARA Alferez Sobral and reached Ushuaia three days later. When National Geographic Endeavour was retired, the piano was donated to the Tomas de Berlanga school in Santa Cruz, Galapagos; the bridge ceiling, notched with polar bear sightings from her time in Svalbard, is with Sven Lindblad. The model, valuable art and other mementos were saved, some items were transferred to National Geographic Endeavour II. National Geographic Endeavour was scrapped on 6 May 2017. Details of the ship
Cunningham-Hall PT-6
The Cunningham-Hall Model PT-6 was an American six-seat cabin biplane aircraft of the late 1920s and was the first design of the Cunningham-Hall Aircraft Corporation of Rochester, New York. The Cunningham-Hall Aircraft Corporation was formed in 1928 and the first design was the PT-6, which first flew on April 3, 1929, it was flown to the Detroit Aircraft Show two days with minor alterations being made including a switch from a tailskid to a tailwheel. The PT-6 was a cabin biplane with an all-metal structure, stressed to meet military strength specifications rather than the much more lenient commercial requirements, however aside from the cabin, covered with corrugated aluminum, most of the airframe was fabric covered, it had a fixed landing gear with a tail wheel. The cockpit held a pilot and either a copilot or passenger, with a separate cabin for four passengers; the aircraft was powered by a 300 hp Wright J-6-9 Whirlwind radial engine. The company's final aircraft was a freighter conversion the PT-6F.
Built during 1937 and flown in 1938, the passenger cabin was modified as a cargo compartment with 156 cu ft of stowage space, an NACA cowling was fitted, along with a variable-pitch propeller. A freight door was fitted to a loading hatch fitted in the roof, it was powered by a Wright R-975E-1 radial engine of greater power. Only two PT-6s and one PT-6F were registered, however as many as six of each type may have been built; the discrepancy from many publications with higher numbers may indicate that from two to nine additional airframes were built, but scrapped without being registered or sold, due to the collapse of the aviation market with the deepening of the Great Depression. A production line had been set up, materials bought to produce 25 examples. Plans for a smaller 4-seat derivative to be named the PT-4, an armed military variant were cancelled. One example was used for charter flying by the Rochester - Buffalo Flying Service fitted with skis or floats. One customer was the Fairchild Aviation Corporation.
George Eastman of Kodak had his first flight in PT-6 The PT-6F was supposed to have been one of three built from parts still available from the original cancelled production run, for an expected Philippine customer, carried the Philippine registration of NPC-44, however a lack of funds caused that sale to be cancelled. The aircraft was sold for around $7,000, made its way to Alaska for a career as a bush plane with Byers Airways. PT-6 Six-seat cabin biplane powered by a 300 hp Wright J-6 Whirlwind radial engine. PT-6F Freighter version of the PT-6. PT-4 Cancelled 4 place version. PT-6 Bomber Cancelled bomber with turret. PT-6F s/n 381 NC444 NC16967 and NPC44 was restored to airworthy condition and as of 2008 was on display at the Golden Wings Museum at Anoka County-Blaine Airport, near Minneapolis. PT-6 s/n 2962 NC692W was restored to display-only status and cannot be flown, is at the Alaska Museum of Transportation and Industry in Wasilla and has been listed on the National Register of Historic Places.
Data from Aero Digest and JuptnerGeneral characteristics Crew: Two Capacity: Four Length: 29 ft 8 in Upper wingspan: 41 ft 8 in Upper Chord: 78 in Lower wingspan: 33 ft 8 in Lower Chord: 54 in Height: 9 ft 7 in Wing area: 378 sq ft Airfoil: Clark Y Empty weight: 2,670 lb Gross weight: 4,350 lb Fuel capacity: 90 US gal Oil capacity: 6 US gal Powerplant: 1 × Wright J-6-9-300 Whirlwind 9 cylinder air-cooled radial engine, 300 hp Propellers: 2-bladed metal propeller. Performance Maximum speed: 136 mph Cruise speed: 115 mph Stall speed: 40 mph landing speed: 45 mph Range: 690 mi Endurance: 6 hours Service ceiling: 17,500 ft Rate of climb: 900 ft/min initial rate Time to altitude: 1.5 minutes to 2,000 ft 5 minutes to 6,000 ft Wing loading: 10.5 lb/sq ft Power/mass: 0.08 hp/lb Eckland, K. O.. "Aircraft Cu to Cy". Retrieved 28 January 2020. George F. McLaughlin, ed.. "Cunningham-Hall biplane". Aero Digest. Vol. XIV no. 5. New York City: Aeronautical Digest Publishing Corp. pp. 106 and 108.
Hall, Randolph F.. "Cunningham-Hall Aircraft Corp. Story". AAHS Journal. American Aviation Historical Society. Pp. 90–97. Herrick, Greg. "Sole Survivor - Amazing restoration of the last Cunningham-Hall PT-6F biplane". Air Classics. Vol. 33 no. 10. Pp. 14-18 and 68-70. Juptner, Joseph P.. U. S. Civil Aircraft Vol. 2. Los Angeles, CA: Aero Publishers, Inc. pp. 220–221. LCCN 62-15967. George F. McLaughlin, ed.. "Cunningham-Hall biplane". Aero Digest. Vol. XIV no. 5. New York City: Aeronautical Digest Publishing Corp. pp. 106 and 108. Theobald, Mark. "Jas. Cunningham, Son & Co". Retrieved 3 February 2020. National Park Service. "National Park Service - Alaska - National Register of Historic Places". Retrieved 3 March 2020
Sara Poidevin
Sara Poidevin is a Canadian professional racing cyclist, who rides for UCI Women's Continental Team Rally Cycling. She raced mountain bikes before switching to road racing in 2013. 2016 1st Young rider classification Cascade Cycling Classic2017 1st Overall Colorado Classic 1st Points classification 1st Mountains classification 1st Young rider classification 1st Stage 2 2nd Overall Cascade Cycling Classic 1st Mountains classification 1st Young rider classification 1st Stage 52018 1st Young rider classification Tour Cycliste Féminin International de l'Ardèche 2nd Overall Tour of the Gila 1st Young rider classification 7th Overall Tour of California 1st Young rider classification List of 2016 UCI Women's Teams and riders Sara Poidevin at ProCyclingStats |
f69d6ff7a66cfa8c | Parallel Worlds Probably Exist. Here’s Why
Published on March 26, 2020
Captions provided by CCTubes – Captioning the Internet! The most elegant interpretation of quantum mechanics is the universe is constantly splitting
A portion of this video was sponsored by Norton. Get up to 60% off the first year (annually billed) here: or use promo code VERITASIUM
Special thanks to:
Prof. Sean Carroll
His book, a major source for this video is ‘Something Deeply Hidden: Quantum Worlds and The Emergence of Spacetime’
I learned quantum mechanics the traditional ‘Copenhagen Interpretation’ way. We can use the Schrödinger equation to solve for and evolve wave functions. Then we invoke wave-particle duality, in essence things we detect as particles can behave as waves when they aren’t interacting with anything. But when there is a measurement, the wave function collapses leaving us with a definite particle detection. If we repeat the experiment many times, we find the statistics of these results mirror the amplitude of the wave function squared. Hence the Born rule came into being, saying the wave function should be interpreted statistically, that our universe at the most fundamental scale is probabilistic rather than deterministic. This did not sit well with scientists like Einstein and Schrödinger who believed there must be more going on, perhaps ‘hidden variables’.
In the 1950’s Hugh Everett proposed the Many Worlds interpretation of quantum mechanics. It is so logical in hindsight but with a bias towards the classical world, experiments and measurements to guide their thinking, it’s understandable why the founders of quantum theory didn’t come up with it. Rather than proposing different dynamics for measurement, Everett suggests that measurement is something that happens naturally in the course of quantum particles interacting with each other. The conclusion is inescapable. There is nothing special about measurement, it is just the observer becoming entangled with a wave function in a superposition. Since one observer can experience only their own branch, it appears as if the other possibilities have disappeared but in reality there is no reason why they could not still exist and just fail to interact with the other branches. This is caused by environmental decoherence.
Schrodinger’s cat animation by Iván Tello
Wave functions, double slit and entanglement animation by Jonny Hyman
Filming of opening sequence by Casey Rentz
Special thanks to Mithuna Y, Raquel Nuno and Dianna Cowern for feedback on the script
Music from “Experimental 1” “Serene Story 2” “Seaweed” “Colorful Animation 4”
View More »
Category Tags:
CCTubes - get your videos captioned! |
1c6bd8ae7c68ef9b | Top movie review ghostwriting service for mba
White presentation folders cheap wedding cake
White presentation folders cheap wedding cake
Production for Cloud Atlas began in September 2011 at Babelsberg Studio in Potsdam-Babelsberg, Germany. You can just keep them on the gadget or on soft file in your computer to always approach the room at that time. Shakespeare also used the Pastoral genre in As You Like It to cast a critical eye on social practices that produce injustice and unhappiness, and to make fun of anti-social, foolish and self-destructive behaviour, most obviously through the theme of love, culminating in a rejection of the notion of the traditional Petrarchan lovers. Proposition 83, also known as Jessica s Law was made by the parents of Jessica Lunsford. Write down your own answer to the question. In order to advocate public policy, therefore, a system of social or political ethics must be constructed. To tell you the truth, I like the uniforms. Meanwhile, up in Hilbert space or configuration space two choices for the supposedly abstract space of quantum mechanics the quantum state chugs merrily along locally since it is governed by the Schrödinger equation, a local differential equation. As we ll see in the command line, the model generator generates a number of files for us. America now possessed the heart of their continent. Coverage Date The date or dates that the information in the document pertains to often not the same as the field date.
White presentation folders cheap wedding cake
Transitioning from the world of high school IEPs to private public institutions of higher learning. The Company has zero tolerance for violence of any kind. This question has become common among students. The Kalinga tribes are perhaps the most diplomatic of all the Igorot. While all companies try put a lot of efforts in achieving the set goals, very few actually invest in setting the right ones. The capitals do not stand out of letters of dimishing size. Groups such as the Virginia Herpetological Society and the Loudoun Wildlife Conservancy help by slowing traffic and physically moving amphibians on these critical nights. Although the negative points might never outnumber the importance of extracurricular activities, one should not neglect the bane of such. As late at 2009, India was the world s second-largest consumer and third-largest producer of tobacco Economist, 2009, p. James Parkinson, a British physician first coined this disorder as shaky palsy in 1817 and included clinical manifestations of this disease involving tremors with muscle weakness, bradycardia, rigidity, unstable posture, and gait abnormality that becomes worse. Even if you think of it as a personal failure, explain what that experience helped you with. The diagnosis and rational decision of sex assignment must rely on the determination of genetic sex, the hormonal determination of the specific deficient enzyme, and an assessment of the patient s potential for future sexual activity and fertility. This step is very challenging for many students, but it s one of the most important strategies used in successful essays. 80 This hypothesis is generally less accepted than the previous hypothesis, but nonetheless provides a possible alternative. Despite the professional opportunities and rewards available at PwC, I do not believe that I can reach my full potential intellectually, academically, or professionally in an accounting firm. One minute later, the ground beneath you begins to shake. Lifestyles and livelihood are mainly driven by certain crucial factors such as Desires, Needs, Influencers and Motivators. Uploaded by Pandora Uploaded on 04 01 2013 Subject Philosophy. The idea seems to be that what makes a subject S better off at time t is in S s interests-at-time- t. Being thus sublates itself because the one-sidedness of its moment of understanding undermines that determination and leads to the definition it has in the dialectical moment.
Nevertheless, all possible efforts will have to be made to save us from total darkness and helplessness. Use a comma to separate more than two verbs. Civilizations of Ancient China and East Asia Essay. Garvin, LowCountry offers up a rotating list of Southern staples including fried steak and gravy, catfish po boy sandwiches, burgers and a variety of sides. Like many of the movements, Dada included writing, painting and poetry as well as theatre. Despite what seems an enormous amount of progress in computer hardware, general computing and even the computing available to most design studios is just not fast enough to easily reproduce art on the scale and level of detail possible with traditional media. Special Operations Forces Tele-training System SOFTS SOFTS takes advantage of proprietary and commercial off-the-shelf technology to deliver real-time language culture training to students anywhere in the world, including those who are unable to attend traditional classes at traditional training institutions. Evidence is provided to support main points. One and a half centuries after 1848, we have learned to value and show the colors of our flag as a sign of our democratic nation, the daily Die Welt editorialized after the abrupt ubiquity of the flag became a news story. This is most ev In contrast, an email arrives almost instantaneously and can be read seconds after it was sent. These were clustered based on their mean color in the Lab color space rather than RGB. It s even teamed up with Olympian Aly Raisman to fight homelessness. Trust that we will each ask for help when we feel scared, panicked, or desperate. Altrichter, Herbert 1991 Do we need an alternative methodology for doing alternative research. Evolutionary analyses of TFBS consensus sequences at in vivo-bound sites have delivered an additional surprise, demonstrating that in some cases they are no more evolutionary conserved than the flanking sequence, even at transcriptionally active regions 6 9. His plant was suffered because it was running with neither profitability nor productivity. It is not the most subtly layered documentary I ve ever seen, but these days it s no longer verboten to take a stance in docs. As my adviser explained, it was important to ensure my parents were aware of the burden they had to carry due to my interest in FIDM. The first characteristic feature of liberal democracy is an elected legislature, sometimes with an elected head of state. Here are some examples sometimes a few short words sound better than one long word.
Presentation cheap wedding folders cake white
Compounding 1 Definition Compounding is a process of word formation by which two or more stems are put together to make one word. Memoir Prompt Have you ever run for public office. Do you keep your wits about you and deal calmly with the situation. Bassanio in Merchant of Venice by William Shakespeare Throughout the play, Bassanio s main focus has been his quest to Belmont in bid to attempt and succeed in the casket challenge laid by Portia s father. Unlike group homes, single or double dwelled homes of the developmentally disabled adults are impacted by cost of living, staffing procedures, and required attention. Spreadsheet outline : A less known method, which makes it possible to simply compare sections regarding their size and text by diving every paragraph into the clear parts. Whether or not you believe the veracity of Willie Lynch speech or not, the outcome is real Creating mental barriers or physical barriers to resources turned us against one another. Ethnic separatist movements include the following. In conclusion Finally As a result (of) In summary Therefore To sum up In other words To summarize Then In brief On the whole To conclude As we have seen As has been said. The first ten days of each semester are an opportunity to visit a number of classes to determine which are most interesting to you. Essay on Women 100 Years Ago and Women Today.
White presentation folders cheap wedding cake
In addition to being mocked, Osric also serves as foil to the hero Hamlet, contrasting with his genuine wit and honesty Draudt, 2002. Take not going to college into perspective. Fils et petit-fils de qâdî juge en matières civiles, judiciaires et religieuses de la Grande Mosquée, juriste et médecin, c est avant tout comme philosophe, et plus particulièrement comme commentateur d Aristote, qu il passa à la postérité, exerçant une influence considérable sur le monde latin. Kansas Distinguished Scholarship Program Created to encourage Brasenose, Chevening, Fulbright, Madison, Marshall, Mellon, Rhodes and Truman scholars from Kansas to continue graduate studies at Kansas public universities. If two are combined, they are disaccharides; if more than two are combined, they make up a large molecule called a polysaccharide. The main thing to remember with any research paper is that it is based on an hourglass structure. And all of it comes just as the developing world is starting to desire U. In addition, nativist sentiment has been present throughout American history as a product of isolationism and, among other factors, wage depression and fear. Since steroid and HGH use only helps those who are willing to work the drugs alone don t build muscle, some argue that they re not all bad.
Since Sarbanes-Oxley, there has been a sleepy provision of the criminal code that could present an end-around to the morass of insider trading precedents under Rule 10b-5. Cultural senstivity - monitoring our verbal and nonverbal behaviors so as to respect and be sensitive to cultural norms, values, and meanings. Always keep original documents in your file. Premium Concealed carry in the United States, Crime, Firearm 806 Words | 3 Pages. He said, among other things, that if our Republic had no other meaning than to guarantee all citizens equal rights, it would have just cause for existence. Some people would actually call this food quite toxic. They can be categorised as natural causes of global warming and man-made causes of global warming. To view a list of Honors scholarships, grants, and awards, click Successful responses to tgekking prompt will name and address the value of any identified forces, as well as how they influenced how you think, work, or act.
Clarity 11 Transparent presentation of musical structure. Transition words show the relationship between ideas. There s so many good things, and positive ways of looking at my life. Authority, which is defined as the power or right to give orders, make decisions, and enforce obedience, is key to a position of power, but the two roles differ in how they achieve this right and the ways they continue to hold on to it. Same word in every line, or in a certain place in every paragraph, etc. Making everyone wear the same school uniform infringes on goes against our rights and is a misuse of authority. It is highly unlikely that they will have you discriminate between the two on the exam given their similarities. In the UK, the main Christmas Meal is usually eaten at lunchtime or early afternoon on Christmas Day. If you re not happy with their work, which rarely happens, you get your money back. What are the Prince2 Certification Requirements for this level. The Pagan movement, by asserting that sexuality and pleasure are sacred, stands as an important counterbalance to repressive religions. 5 of the total outlay was allocated for education. The Baroque Period 1600-1750 was a revolutionary period for music. This theory is considered as a science of the behavior of each employee. Therefore, looking back at history, it is noticeable that the British affected the natives negatively and positively, and has also left a trademark on the culture today that can be found in New Zealand. Modes of decay n Neutron emission p Proton emission Bold symbol as daughter Daughter product is stable. Only her lady-in-waiting knows that she loves Rodrigue. Although both of the stories are very similar, they also are very different, too.
White presentation folders cheap wedding cake
For example, the vast exposure other cultures have of Coca-Cola could affect their own ideals, resulting in the adoption of America s ideals. One of the more accessible places to find their view is in George Grants volume in Gary Norths Biblical Blueprints Series. But he didn t have to face NT-Unit As long as he could hold off it infecting Unit, NT-Unit would run out of power, and that would be that. It could be anything from a lesson you learned from experience to a story of how an object impacted your life. Elie Wiesel thinks that memory should stir people to act against injustice, instead of just tuning it out and going on with their lives. Poe and Hawthorne s literary genre of Dark Romanticism opposes human perfectibility, and both writers employ symbolism, irony, similar characters and plot to convey the theme that obsessions will inevitably lead to destruction. Exuberance seems like one of those innate traits that stable from birth, either you re exuberance or you re not. The Souls of Black Folk Non Fiction Book, 1903. Until the last two chapters it is told from the view point of Mr Utterson; a friend of Jekyll s who is trying to piece together the story. I grew up here I grew up in Rahway, New Jersey with my mother, step-father and a little annoying brother that I loved. I took something in the morning to get me going and something at night to help me sleep. Would you tell your Black friends to stop with seeing themselves as Black or African-American. Since you have a confirmation added to your collection, the result of long periods of diligent work and commitment, you realize you can go up against any test and finish it with energy.
Leadership Title The role of leadership theory in raising the profile of Women in Management. The Brambell Report of 1965 recommended that animals should have the freedom to stand up, lie down, turn around, groom themselves and stretch their limbs. This term is easy to recall if you remember that something that is therapeutic is done to help a person cope with a situation and ultimately feel happier and more relaxed. When I was younger, cooking was always associated with the holidays, which was the prime time for me to mix chocolate with various other gooey ingredients, and with the help of my mother, create a delicious dessert. Localized Discomfon, syncope or a hematoma at the site of needle puncture may occur as a result silent hero essay titles the Daily venipuncture. Psychometric tests are used in recruitment because companies want a means of fairly and accurately predicting which applicants are likely to be successful in a particular job. They re afraid that Temple University will look down on too many attempts to raise your score. As a single parent with inconsistent child support, the Carol E. 77 crore, the expenditure incurred is Rs. On July 17, 1918, the family, their dogs, and their servants were told to assemble in the basement.
The country has high rates of alcoholism, and 70 percent of women in the country say it s OK for their husbands to hit them if they burn dinner. Of his grandson Hasan we read that his vagrant passion gained for him the unenviable sobriquet of The Divorcer ; for it was only by continually divorcing his consorts that he could harmonize his craving for fresh nuptials with the requirements of the divine law, which limited the number of his free wives to four. Diaz s explains in his short story different stereotypes on young adult and dating. La remarque vaut aussi bien pour lobjet que pour les acteurs. The 26th Amendment passed and ratified in 1971 72 prevents states from setting a voting age higher than 18. The notation expresses the presence of the standard like BPMN 2. Astronomical wealth disparity and its attendant link to violence is something we share with Brazil even as we rape that nation s rain forest. Music dissertation ideas like this are quite interesting but due to celebrity status of these artists the information are quite conflicting and contradictory therefore hypothesis can be used, especially for music doctoral candidate. If the DOI does not exist here, the article most likely does not have one; in this case, use a URL instead. He approaches the subject of his inquiry free form all presuppositions, and tries to understand the organic structure of a religious system, just as a biologist would study a form of life or a geologist a piece of mineral. Definiiton Scotland and Wales, There are minority nationalist parties.
The rights and duties of each citizen are very priceless and connected to each other. You will have to move out of housing and may face extra costs if you withdraw from school before the end of the semester. The change is accounted for prospectively by simply depreciating the remaining depreciable base of the asset book value at date of change less estimated residual value over the revised remaining service life. The history of a symbol is one of many factors in determining a particular symbol s apparent meaning. Rupert s cavalry how can knowledge open doors essay contest was the strongest arm of the King s service. I would like to experience other business areas and markets and would appreciate owning a strategic role in these areasmarkets. Useful information about Urdu phrases, expressions and words used in Pakistan in Urdu, or Pakistani conversation and idioms, It s 10 o clock. When introducing a new paragraph, your transition should. His admiration of Gatsby in having an extraordinary gift for hope, a romantic readiness he had never found in any other person and which it was not likely he could ever find again Fitzgerald 1 overpowered his questions on Gatsby s character and that of his company. And remember, tightening nearly always adds power. Dec 06, В В Is Stanley Kowalski simply a tragic villain. This reservation encompasses the four corners of the United States, which includes portions of New Mexico, Colorado, Utah, and Arizona. There are several ways of structuring a literary analysis, and your instructor might issue specific instructions on how he or she wants this assignment done. Most parents are engaged in busy working schedules such that they have no time to connect with their children.
L ordre de Cluny connaît son apogée aux XIe et XIIe siècles. So the biggest priority for most art institutions in the United States in the next few years is to implement a digital age shift in their business model. 8 Simple chunks of information can be recalled without having to go through long term memory, such as the sequence ABABAB, which would use working memory for recollection. Sampling and analysis can then determine if human activities have contaminated the environment or caused harmful reactions to affect it. Dans l extrait du Bourgeois gentilhomme, le contraste entre les deux chansons, l une qui ressort de la lyrique amoureuse et l autre de la chanson populaire proche du farcesque, permet d exploiter le comique de mots par exemple, la réplique il y a du mouton dedans. Once again, the view of women as mere objects of hospitality can be derived from the traditional Christian view God clearly praises male values in fighting, war and strength. In the 1940s ecologist Aldo Leopold penned his now famous essay Thinking like a Mountain. In particular, please note that this is an essay prize and entries should conform to the essay format i. I think that Kalolina is mistaken to invoke complexity theory as an argument in favour of studies which cannot be generalised, or can only be generalised within a narrow temporalgeographical or cultural range. These Indiana University college application essays were written by students accepted at Indiana University.
Finally, take programs require the resident assistant to accompany residents to an event which can include an on-campus program such as an institution sponsored event, a basketball game or perhaps a movie. Under Captain Mayr national thinking courses were arranged at the Reichswehrlager Lechfeld near Augsburg, 15 with Hitler attending from 10 19 July 1919. This does not include the number of decimal places to which an excel spreadsheet works. Dans ce contexte, l UE a reconnue la valeur potentielle de la Turquie comme une route relativement sûre et indépendante pour l offre énergétique. Although the religious doctrine allows the sacrifice anytime over a period of three days starting from the Eid day, most people prefer to perform the ritual on the first day of Eid. The Philippines also had the highest rank among the Asian countries on the Global Gender Gap Index, implying that Philippines is an egalitarian state, equal rights and opportunities from education to employment. As a medium, the radio has a very broad spectrum in our lives, especially our daily lives. Rejecting the explanation offered him by the presbyters, he broke off the interview with a threat to make a schism in their church. Find a work of art that represents the story s setting for you. The effect is unsettling; something we thought dependable is revealed as anything but. When I become established within my profession it will be essential for me to create goals for me and my teammates to be able to achieve and excel. Anatolia traditionally was a land with Greek Christian population. The destruction of the Abame village is Achebe s first signal to readers that the people of the Igbo may never be the same. Gray Mirror is a four-panel painting from 1991 see Plate 3.
This is one of the main discussions we had in class and now that we are studying Napoleon and the French Revolution we will base our thought on our research. Note that copyleft infringement is possible. Act One Scene Two The use of instructional language is effective in accentuating Edmund s influence over his father. Interestingly, a minor component B1 of gentamicin has been shown to provide effective readthrough activity 52, suggesting that an enrichment of this component of gentamicin or modified aminoglycosides with reduced toxicity may hold promise for the future 23. As described with reference to the preceding criticism, brain fingerprinting accurately and objectively detects whether certain specific information is or is not stored in a subject s brain. The education requirements comprise two major components. As the conflict evolves, people alternate playing roles like the hero, the victim, and the villain in each situation. Example 4 Rime to the Ancient Mariner By Samuel Taylor Coleridge. A 2012 study found that at least 800,000 minors had been harassed on Facebook. If you feel like law essays are a waste of your time, you re probably right you should be spending more time focusing on the important issues in your law career, not on writing essays. The whole place is now inundated with a frenzy of people. It is well known that practically every one of them had several branches, many of which were quite large. Using a celebrity Eminem to get the message across really touches the young generation who will lead the society in 10-20 years. This restriction in movement results in delayed motor development. The moving party has the burden of establishing the absence of a genuine issue of material fact.
Description Annual awards for female undergraduate or graduate students enrolled full-time at the University of Florida College of Liberal Arts and Sciences majoring in Humanities Classics, English, History, Philosophy, Religion, and Languages, Social Sciences Anthropology, Communication Sciences and Disorder, Geography, Political Science, Psychology, and Sociology, Individual Interdisciplinary Studies, or Women s Studies. Le plan, bien quen deux parties, est acceptable car les arguments sont bien illustrés à laide dexemples bien choisis. Uncontrolled discharge of wastes and improper sanitation also contribute to water pollution. Pandit Jawaharlal Nehru is not only known for his political career or for the service of the country but was also famous among children. O virgins, o demons, o monsters, o martyrs. Ardis Rewerts Memorial Scholarship Jack S. This article investigates the use of swearing in Philip Larkin s poetry in relation to English class struggle. There is an extensive discussion in the scientific literature on what policies might be effective in responding to climate change. The term school culture generally refers to the beliefs, perceptions, relationships, attitudes, and written and unwritten rules that shape and influence every aspect of how a school functions, but the term also encompasses more concrete issues such as the physical and emotional safety of students, the orderliness of classrooms and public spaces, or the degree to which a school embraces and celebrates racial, ethnic, linguistic, or cultural diversity. China is rapidly changing and the government and people look to cultural identity as a means of creating a stable and common set of meanings to help bind the people and nation through this transition. Top Benefits of Buying Locally Grown Food. The tricksy sprite and the candles (brought by Betty) need no explanatory words of mine. On the contrary, becoming a published and relatively well-received author was entirely new to her. Subsequent paragraphs provide supporting detail that shows an understanding of facts andor opinions.
1. lesoh vufe
Young Country Girl Dancing, black, red and white chalk and stump on paper.
2. kenagix
The Accident That Ripped Out My Life in an Instant. Taking place in Verona, an ignorant Romeo first meets a childish Juliet at the Capulet's. In the reflective essay, you will be expected to think about your experiences during a period or an event, indicating the lessons learned in the process.
• necuv xozi
Paying attention to your ABCs You can boost your wellness by focusing on. My aim in life essay for 2nd year ations,my aim.
Leave a Comment |
a9b44aaaa5093ef2 | When Is an Apple Not an Apple?
Thoughts on Wolfgang Smith’s Metaphysical Approach to Quantum Physics
by Dr. Alec MacAndrew
Wolfgang Smith has initiated a collaboration with Rick DeLano culminating in their planned 2019 film The End of Quantum Reality. As I understand it, much of the film has its roots in Dr Smith’s earlier works on the intersection between physics, metaphysics and religion, such as his 1995 work (reissued and updated in 2005), The Quantum Enigma. Smith also espouses geocentric, Young Earth Creationist and other pseudoscience views, not to mention bizarre and syncretic ideas such as numerology, astrology, and Hindu esoterism, but I intend to concentrate in this piece on his metaphysical approach to quantum physics, as set out most comprehensively in The Quantum Enigma. He bases the quantum physics aspects of his 2019 book, Physics and Vertical Causation: The End of Quantum Reality on the earlier work, to which he adds the geocentric, YEC and anti-relativity views.
Whereas many of the neo-geocentrists are sadly unequipped to deal with the material they address, Dr Smith was educated in physics and mathematics, has published papers on mathematics, and has been on staff in some capacity at one or two good universities. Therefore, his opinion on the meaning of quantum mechanics at the most profound level is, at least on the face of it, worthy of consideration.
Cartesian Dualism and Modern Physics
In The Quantum Enigma, Smith presents what is, at root, a teleological thesis based on an idiosyncratic interpretation of quantum mechanics. Smith believes that the world started to go wrong with Galileo, and since then the ancient wisdom, the hylomorphic concept and the connection with the transcendent was lost[1]. He charges Newton, Descartes, Darwin and Einstein with these crimes, but he reserves his strongest vitriol for Descartes, and particularly for Cartesian mind-body dualism, or the ‘bifurcation’ of reality. He lays the foundation for the rest of the book with an exposition of what he believes to be this fundamental misconception in modern thinking. As we shall see, his solution introduces a fundamental and, as we shall see, unwarranted bifurcation of reality in its own right.
According to Smith, moderns follow the Cartesian error, which is to separate reality into mind and matter, the internal and external reality, res cogitans and res extensa. Cartesian philosophy implies that we cannot truly know the world, because we are trapped in our minds and we can never know whether the impressions our minds receive via our senses are true reflections of the external world (or indeed whether there is an external world at all). We can never know whether the impressions you receive correspond to mine, ultimately a question of what modern philosophers call qualia, the experience of qualities in the external world.
Smith never properly explains how accepting the idea of mind-body dualism (the Cartesian assumption as he calls it), either consciously or unconsciously, leads to all the errors of modern thinking that he claims to exorcise, and how abandoning it allows the scales to fall from our eyes and all that is paradoxical in quantum mechanics to become clear. Smith’s inability or unwillingness to set out his case clearly against “the Cartesian assumption” is frustrating. We see that he is railing against something, but he identifies neither the thing nor the rationale for his opposition, nor what benefits would accrue from giving it up. His chronic lack of clarity on this point dates back at least to the 1995 edition of The Quantum Enigma and is still present in the 2019 Physics and Vertical Causation. In fact, so far as the latter work goes, a reader attempting to understand Smith for the first time with no exposure to his earlier works will struggle to discern a coherent train of thought that commences with the fact of Descartes’s philosophy and concludes with Smith’s assertion that eliminating it with the help of quantum mechanics would solve any number of philosophical and scientific puzzles faced by modern thinkers.
Of course, one recognises the controversy in modern philosophy regarding Cartesian dualism, expressed in various ways by critics such as A N Whitehead, David Griffin and Paul Churchland. Smith conjures with the name of Whitehead[2] without describing or otherwise engaging in Whitehead’s process philosophy or exploring how Whitehead proposed to resolve the question of Cartesian dualism. For Smith, Whitehead is more valuable as a stick with which to beat Descartes than a philosopher whose ideas have value in their own right. Of Griffin and Churchland, there is no mention. So, although substance dualism lacks neither critics nor difficulties, which we are free to explore for ourselves, we are bound to read Smith as he writes, and, despite castigating “the Cartesian error”, he is vague in The Quantum Enigma, to the point of incoherence, about the philosophical problems that he believes are entailed by Cartesian thinking.
For example, one struggles to see how the phenomenology of Husserl, whose name Smith also wields like a magic wand3, can be invoked as a talisman against Cartesian thinking, since Husserl takes Descartes’s methods as a spring board, and creates an entire philosophy of phenomena from first principles, much as Descartes created an epistemological philosophy from first principles[3]. Perhaps Husserl can be regarded as having created a devastating critique of “the Cartesian premises”, and perhaps not. Smith shines no light on the question. And this characteristic of Smith, rarely to engage properly with others and never to engage with those who raise difficulties for his thesis, pervades all his work.
On this matter of Cartesian dualism and its relevance to the interpretation of quantum mechanics, Smith follows, to a great extent, the argument set out by Werner Heisenberg in his 1962 book, Physics and Philosophy[4]. The differences between them are that Heisenberg’s exposition of the issue is infinitely clearer and better constructed than Smith’s; that Heisenberg examines and finds wanting the Cartesian proposition of the separation between the observer, the I, and the rest of the world, rather than the mind-body, res cogitansres extensa dualism which is its consequence; that Heisenberg concludes that the empirically determined facts of quantum mechanics undermine the idea of an objectively existing external res extensa that can be known fully and without ambiguity; and that Heisenberg, unlike Smith, is clear that abandoning Cartesian thinking does not, to any extent, resolve the fundamental paradoxes of quantum mechanics.
It turns out, anyway, that Smith is tilting at windmills, because Cartesian dualism is far from being the predominant paradigm amongst modern physicists and philosophers. Amongst the scientific community, realism and monism prevail. Realism is the doctrine that our senses, with or without the help of instrumentation, give us a more or less accurate representation of a world that exists in reality and independently of our perceiving it, and that truth is the correspondence between any proposition or model in question and reality. Unless one believes that both direct perception by the senses and measurement using instruments provide knowledge about the state of reality, it seems pointless to do physics at all. Certainly, it would be hard, if not impossible, to justify a solipsistic physics, and so scientists overwhelmingly subscribe to realism rather than idealism.
Monism, as conceived by modern scientists and philosophers, stands opposed to Cartesian dualism and holds that mind and body are not separate entities. A popular version of monism, which relies on the concept of emergence, describes mind as an emergent property or process of body, or more precisely of brain (the philosophical stance known as physicalism). According to this hypothesis, mind is what brain does. Smith seems to think that modern scientists increasingly hold that mind is quite separate from matter[5], quoting the neurosurgeon Wilder Penfield. In any case, he fails to acknowledge that much expert neuroscientific opinion does not now support Penfield or the dualism hypothesis, and the evidence is extensive and growing that the activity of the mind proceeds sufficiently from neural activity, based on the observation of strict supervenience, the effects of neurosurgery and brain pathology, and other neuroscientific evidence. His belief that physicists are overwhelmingly in thrall to Cartesian dualism cannot possibly arise from any actual acquaintance with people working in this field. One wonders how he arrived at the idea that Cartesian thinking has modern physics and philosophy in its grip? He can have reached that conclusion only with the help of dated and secondary sources, Whitehead and Heisenberg perhaps, as he certainly has not tested it for himself.
Although most working scientists hold realism, emergentism and monism pragmatically, without examining them critically or thinking through the implications of their stance, nevertheless there exist well-argued and well-developed philosophical grounds for them. Realism in the philosophy of science has been explored by Russell, Wittgenstein and Popper. Neutral monism, a metaphysical school based on the work of Mach, James and Russell, is continued today by, for example, the work of Chalmers, and Ladyman and Ross, and especially Erik Banks who has developed neutral monism in a study in which he refers to his updated version as realistic empiricism. Emergentism has a history that dates back as far as John Stuart Mill. More recently, C. D. Broad is widely accepted as developing the standard, modern description of the position. Various forms of emergentism are currently being proposed and vigorously debated by metaphysicians such as Kim, McLaughlin, O’Connor and Humphreys. This is not the occasion for a discussion of these ideas in detail; it is sufficient to note that this extensive literature exists and that Smith does not refer to any of these influential thinkers; in fact, does not even acknowledge the existence of these schools, which are obviously opposed to his assertion that substance dualism prevails in modern thinking. Let us understand that I am not seeking to take sides in the dualism-monism debate, to defend either substance dualism or monism here. It is not a matter of which of these alternatives in all their variations are true – that is not the point. It is enough to realise that the claim that Cartesian dualism currently prevails with philosophers and scientists is simply untrue. So, it transpires that Smith’s entire critique of “the Cartesian assumption” is uncalled for and misdirected, leaving the reader with a powerful sense that the first two chapters of the book have gone awry.[6]
I note in passing, that on the question of dualism, he seems to be arguing against himself, revealing a surprisingly muddled way of thinking about the problem of the processes that enable perception. Having examined the “ghost in the machine” or the “Cartesian theatre” doctrine of visual perception, the idea that there is a separate mind which observes an image in the brain produced by the visual neuro-system, and having rightly found that concept wanting, he concludes, “…the missing piece of the puzzle must be strange. Call it mind or spirit or what you will…”[7], thereby coming full circle and plumping for a solution that is indistinguishable from mind-body substance dualism of a distinctly Cartesian kind.
The Physical and the Corporeal
Smith then turns his attention to the concepts of quality and quantity. He characterises the quality of objects as being those attributes which are directly perceived, such as colour, smell and so on. Conversely, a quantitative attribute is measurable which results in a mathematically or at least numerically based result, such as the reading of a pointer on a scale. (In discussing this point, Smith conflates the concepts of mass and weight, as defined within physics, a surprising thing for a “physicist” to do. His claim that mass is a contextual attribute is also wrong – non-relativistic mass is an invariant and non-contextual attribute of an object.) Qualities are not measurable and must be directly perceived. He then proceeds, at some length, to propose that quantity is an attribute of physical objects, and these are the proper and only objects studied by physics, whereas qualities are the properties of what he calls corporeal objects and can be directly perceived through an intellective, and in his view, momentous act. To use his jargon, the intellective faculty perceives reality instantly and directly, as opposed to the rational mind which reasons from data to conclusions about reality. It is important to recognise that the term ‘perception’ extends to all the senses and is not limited to visual perception, although the bulk, if not all, of Smith’s examples turn on vision.
This distinction, between the physical and the corporeal, is, in his scheme, a categorical one, his claim being that physical and corporeal objects are ontologically distinct. So far as a single object goes, for example an apple, there is both a physical object (which he denotes by ‘SX’) and a corporeal object (which he denotes by ‘X’) which are correlated, in that the corporeal object is the manifestation or presentation of the physical object on a higher ontological plane. Physics exclusively studies physical objects and their quantities and is incapable of discerning qualities which pertain only to corporeal objects. He associates the physical with Aristotle’s matter and the Scholastics’ substance; and the corporeal with the matter given form, the substantial form according to the hylomorphic principle. His claim is that physics is incapable of acknowledging or studying qualities and can therefore know nothing of the essence or “quiddity” of a thing, which is manifested in its qualities. It cannot tell us what a thing is because its essence and its qualities are outside the scope of physics. For Smith, therefore, true higher knowledge of a thing can be gained only by consideration of its qualities by direct perception without the aid of instruments[8]; physics can only access quantities and is limited to consideration of the substance or the matter of things, their essence and form being inaccessible and invisible to physical methods. Smith’s distinction between the physical and the corporeal, between quantity and quality, is crucial for his scheme, as he goes on to argue that the collapse of the quantum mechanical wave function, which I will discuss below, is nothing other than the physical becoming manifest in the corporeal plane, and so proceeding from potency to act.
Does this distinction between ‘physical’ and ‘corporeal’, as defined by Smith, stand up to scrutiny? Well, he does rather gloss over the point, not by devoting insufficient words to it – words there are aplenty – but by failing to develop his proposition with clear, unambiguous definitions and with lucid illustrations that generalise his point. He is obviously happier to exploit the distinction than to establish its existence in the first place. His muddle can be illustrated by reference to his own examples of which attributes constitute quantities and which constitute qualities.
For him a physical attribute is anything quantifiable and measurable by physicists, in every instance by using an instrument, even if it is merely a tape measure. To illustrate this definition he offers, as an example, the attribute of mass. Physicists measure mass, he says, by placing an object on a set of scales, the necessary instrument, and by reading a pointer. (Actually, in this example what is being measured is weight, and mass is inferred from Newton’s second law.) Weight is not directly perceptible, or so he claims. But surely, we can directly perceive the difference between the weight, and hence the mass, of a feather and of a cannonball, although our direct perception of weight is not quantitatively precise (as are all direct perceptions more or less quantitatively imprecise). Indeed, our ability to discriminate between different weights becomes better when two objects are in question that can be compared side by side. Because of the possibility of direct perception of weight by hefting an object, using the kinaesthetic and other senses, then, by his own definition, mass must be a quality as well as a quantity[9].
When a baker is preparing bread, he might weigh the ingredients, flour and water, with kitchen scales, an instrument. An experienced baker could equally well prepare a weight of the ingredients which he directly perceives to be right, kinaesthetically, without using scales. According to Smith’s proposal, in the first case the weight of flour and water is measured and quantitative and thus a physical attribute, and in the second case the weight is directly perceived and qualitative and thus a corporeal attribute, and the two are ontologically separate. On the face of it, this is ridiculous. So, it is the case that weight is an attribute which can be measured, and which can be directly perceived.
His example of a quality is colour, specifically that of a red apple. He states quite clearly that a colour is a quality because
“…redness…unlike mass, is not something to be deduced from pointer readings, but something, rather, that is directly perceived. It cannot be quantified, therefore, or entered into a mathematical formula, and consequently cannot be conceived as a mathematical invariant.”
Well redness can obviously be directly perceived and distinguished from other colours, provided one is not colour blind. Even for a “healthy” observer, where red shades into orange, there might be some dispute as to how the object is more accurately described, whether it is better described as red or orange; where the red is not fully saturated, the disagreement might be between red and pink; whereas for objects that reflect blue as well as red light, the discussion might be whether the object is red or purple (magenta). So, redness is not a single trivially perceived quality, but it merges imperceptibly (and sometimes controversially) into other colour qualities, such as pink, orange, and purple. The reason for these gradual transitions is that objective colours lie on a continuum[10] as do the wavelengths of light which correspond to the colours. As in the case when comparing the masses of objects, the colours of objects can be more easily discriminated by direct perception if they can be compared side by side. And crucially, and in direct contradiction to his statement above, the colour of an object can be more accurately determined by quantifying it with an instrument and deducing it from pointer readings, or their equivalent.
The colour of an object can be accurately determined by a spectrophotometer, which measures the intensity of light reflected from it as a function of wavelength and presents the result as a (Cartesian) graph of reflectivity versus wavelength. The perception of colour is contextual – a white object will appear red in a red light, but here again, by measuring the reflected light, the instrument will accurately determine the colour of that object in that context. Colour is directly correlated with wavelength(s) of electromagnetic radiation: for example, a 633nm laser produces red light, a 442nm laser produces deep blue and 528nm green. Photographers are aware of how important colour calibration is throughout the photographer’s workflow in order to ensure the photographic print is as faithful a copy of the original scene as possible. Photographers, and digital photographic equipment, use various quantitative schemes to represent colour, including RGB (red-green-blue) and HSL (hue-saturation-luminance) values. I can say the apple is dark red, or I can say it is #7d122b in hex RGB. The latter is much more precise than the former, but both describe the objective attribute of the apple’s colour.
In one sense, colour can be regarded an objective attribute, that is, it belongs to the object. Colour is also perceived and so, in this sense, can be regarded as subjective. There is an obvious correlation between the two aspects. In the human colour vision pathway, different colours stimulate the three types of cone receptor to different extents, and the absence of one or more types of cone causes defective colour vision, manifested as an inability to discriminate colours (most commonly red and green) which are clearly differentiated according to observers possessing all three types of cone. The effective quantification of colour within the human visual system is therefore seen to be a crucial part of the process of visual perception, which culminates in the brain state we refer to as red or whatever colour we perceive. According to this perspective, colour is quantified not just with instrumentation, but within the visual system as an essential feature of colour perception. The colour perceived therefore depends on the objective colour attributes of the thing perceived, and on the physics and biology of the visual pathway; and is analogous in this respect to the perception of any objective qualitative attribute via the relevant sensory pathway. Nevertheless, the objective colour, the attribute possessed by the object, is not affected by the inability of someone with defective vision to perceive it “correctly”, and, in fact, we can be sure that someone has defective vision if they are unable to distinguish quantifiably distinct colours that can be distinguished by those with normal vision; and moreover, defective vision diagnosed by testing is usually found to be caused by an underlying physiological defect, in this case the lack of a type of cone receptor. Because of the possibility of measuring and quantifying objective colour, then, by Smith’s own definition, the attribute of colour must be a quantity as well as a quality.
It seems, from the foregoing, that the cases of mass and colour are equivalent. Mass can be measured quantitatively but can also be directly perceived. Colour can be perceived directly but can also be measured and quantified. The critical distinction that is at the foundation of Smith’s thesis is therefore a distinction without a difference, on the grounds of his own examples. In fact, we are done here, having reached page 12 of a book of more than 100 pages – if the foundation is rotten the building cannot stand.
It is true, of course, that there are real attributes of objects, which, as humans with our human sensory apparatus, we cannot directly perceive. Examples include the magnetisation and electrostatic charge of objects, which can be observed with instruments, but which we cannot sense directly. This limitation, however, lies not in the ontological status of the attributes, but in the sensory apparatus that has evolved in human beings. Other creatures do have the ability to perceive these attributes directly. For example, homing pigeons can perceive the direction of the Earth’s magnetic vector and use this percept to navigate. Many species of fish, as well as the platypus, directly perceive electric fields, and the perturbation in electric fields caused by the presence of objects. So, the issue turns out to be related purely to the available sensory apparatus of the organism, rather than to a fundamental ontological difference between the different sorts of attribute. Moreover, Smith’s essential claim is that qualitative attributes cannot be studied by physical methods, and it is this limitation of physics which he puts forward to uphold his view that corporeal objects, which, according to him, alone possess qualitative attributes, are on a different and higher ontological plane from that plane which is studied by physics. However, we have seen that the idea that physics cannot study qualities is wrong – physics is able to study any objective attribute, colour, timbre, taste, smell, tactile feel and so on, that he asserts to be purely qualitative.
Let us consider a potential criticism of my analysis above. It is too facile, too simple to be taken seriously, the critique goes. I am making a category error in claiming that redness can be measured and quantified, because redness is a subjective property, it is, in modern parlance, a quale, the experience of redness, and no-one knows in detail how qualia arise and whether one person’s experience is similar to or corresponds to another’s. However, that is not a criticism that Smith or the Smithians can raise, because to do so would be to fall headlong into the philosophy they deplore, the Cartesian pit. When Smith discusses quantities and qualities, we can take it that he is referring to objective attributes, attributes that belong to the object, which can be, but need not be, perceived, and which persist in the absence of perception. In fact, he explicitly states this to be so. He writes: “So far as objectivity and observer-independence are concerned, therefore, the case for mass and for color stand equally well; both attributes are in fact objective and observer-independent in the strongest conceivable sense.”[11] So, introducing the complication of qualia, the ineffable personal experience of attributes, or any other observer-related consideration, makes no difference to my argument above.
Next, let us look at colour from a scientist’s point of view, and see how a scientist’s perspective of, say, a red apple informs us about its attributes, and what that knowledge tells us about the essence of the thing. Of course, the scientist can perceive the colour of the apple directly just like the next person, so that quality is obviously available to him along with its aesthetic and symbolic associations in all their richness, but he is able to see much more. He understands, for example, that the colour arises from the reflection of a particular part of the visual electromagnetic spectrum (or to put the inverse case, by the absorption of the part of the spectrum other than red), by certain pigments in the skin of the apple. The pigments in question are anthocyanins, which appear in many varieties of ripe fruit (and incidentally in the leaves of some deciduous plants in the autumn, giving them that rich New England fall colour). The scientist knows the molecular structure of various anthocyanins in detail and understands how the interaction of that structure with white light results in the reflection of red light and the absorption of the other colours by resonance at the absorbed frequencies. He knows that different anthocyanins have somewhat different structures, resulting in electronic resonances which cause the pigments to reflect different shades of red and purple. He understands that the perceived colour is also a function of the concentration of the pigment molecules in the skin of the fruit. He understands the biochemical pathways that the plant uses to make anthocyanins, and he even understands how DNA, which is active in every cell of every apple tree, dictates the production of different anthocyanins in different strains of fruit, and which genes are responsible. He knows that fruit has evolved to aid seed dispersal and that the colour of ripe fruit has evolved as a signal to animals (and man) that the fruit is ripe, thereby optimising the seed dispersal. He understands that anthocyanin synthesis pathways originally evolved in plants to protect them from light-induced damage at various stages of their growth, so it seems that the protective pigment was co-opted to a new signalling purpose later in evolution. It seems incontestable that understanding these other aspects of the redness of apples can only add a deeper and richer knowledge of the apple to that provided by direct perception of its qualities, and that this additional knowledge tells us more about the process, the end and the essence of an apple; that is, how it works, what it is for, and what it is.
Richard Feynman eloquently made this point about flowers[12], and Richard Dawkins’s Unweaving the Rainbow[13] is an extended essay on how science enriches rather than impoverishes one’s relationship to and understanding of the natural world (the title is an ironic quotation from Keats’s poem Lamia, where it is claimed that Newton’s discovery of how a rainbow is formed robs it of its mystery: “Philosophy will clip an Angel’s wings, Conquer all mysteries by rule and line, Empty the haunted air, and gnomed mine – Unweave a rainbow”. This sort of misapprehension also seems to have been at the root of Goethe’s demonstrably erroneous theory of optics, which was conceived as a crusade against Newtonian optics).
In a passage critical for Smith’s distinction between corporeal and physical objects, he tells the parable of the billiard ball[14]. The billiard ball that we perceive is a corporeal object, X, he claims, and associated with it there is a physical object, SX, which can be represented by, for example, “a rigid physical sphere of constant density” (by “constant density”, I take it that he means “uniform density” which means something quite different). He continues: “The crucial point, in any case, is that X and SX are not the same thing. The two are in fact as different as night and day, for it happens that X is perceptible, while SX is not.” In order to prove this point, he attempts to show that SX, the physical object, is not perceptible, because “It can also be represented in many other ways. For instance as an elastic sphere…”. He goes on to make a further claim, that the physical object is not perceptible because it is composed of atoms or subatomic particles, and collections of atoms cannot be perceived. He offers no evidence or argument for the latter statement but states it as a truism, obviously intending the reader to take it as self-evident. What we perceive, he insists, is an object, the identity of which is indisputable, a red or green billiard ball (and it seems that here, and elsewhere, he limits perception illegitimately to the visual sense).
But this is to beg the question: before Smith can characterise corporeal and physical objects and discuss the distinction between them, he must show that there are, in reality, two associated but ontologically distinct objects, a task which he shirks. For when we perceive a billiard ball, we know it as such, precisely because it is, and is perceived to be, within practical limits, a smooth spherical ball, of a certain uniform density, with a certain hardness and elasticity, and with a certain diameter. In other words, in modal terms, these attributes, shape, density, uniformity, size are essential rather than accidental properties of the billiard ball. A ball that is not spherical, or that has non-uniform density, or is 3mm in diameter, or is made of sponge or iron, is not a billiard ball at all and a person familiar with billiard balls would not perceive it to be one. In order to perceive the ball as a billiard ball, with an identity beyond dispute, one must perceive that it possesses these essential attributes. The attributes, sphericity, density, uniformity, size do not constitute a separate object on a different ontological plane – they are essential attributes of the one perceptible object. Nor are they objects in their own right, as Smith seems to treat the rigid sphere. As for his diktat that collections of atoms are not perceptible, one reflects that everything we perceive is, indeed, a collection of atoms, and that therefore collections of atoms are, after all, perceptible as the object they constitute, just as collections of grains of sand are perceptible as a beach. For all the bluster of the parable of the billiard ball, what is left is a bare assertion, decorated with the conjuring words corporeal and physical, amounting to no more than a magical spell or incantation which evidently holds his disciples in thrall.
It seems that Smith was influenced in his ideas that the physical sciences can access only quantity, corresponding to substance, by the French esotericist and metaphysician, René Guénon [15]. Guénon, was one of the founding triumvirate of the philosophia perennis school along with Ananda Coomaraswamy and Frithjof Schuon. He is known as an arch-Traditionalist, hermeticist, gnostic, freemason, Sufi, symbolist and numerologist, whose over-arching notion was that the world goes repeatedly through a multi-millennial cycle and has just now reached the lowest point in the aftermath of the Age of Reason and the Enlightenment. Guénon sneers at what he calls “profane science”, apparently a degenerate residue of the “ancient traditional sciences”, and “profane arithmetic” and geometry in the modern sense. His sacred geometry is the foundation of arcane symbolism, his sacred number science is nothing more than rank numerology and his ancient traditional sciences include astrology, alchemy and other activities associated with hermeticism and occultism. This is all mumbo-jumbo, hardly worthy of serious consideration. To the extent that he relies on it, it taints Smith’s argument[16]. But then Smith is foremost an esoteric Traditionalist in the mould of Guénon and only secondarily a Christian philosopher interpreting Thomism for the 21st century – it is unsurprising to discover that he subscribes to and incorporates into his work much of this nonsense[17].
So, Smith’s argument for his imagined distinction between the physical object and the corporeal object, such as it is, turns out to be illusory. There can be no ontological distinction between these objects, because there is only one object, to which his own examples attest, and his idea that a corporeal object is the presentation of a physical object on a higher and distinct ontological plane is seen to be a mere fancy, a superfluous bifurcation of his own. In every case, including the examples proffered by Smith, there is only one object, with attributes of many kinds, some of which we perceive directly through our senses, some of which we observe and measure with instruments, and some of which, including his own examples of mass (or weight) and colour, we can directly perceive and measure with instruments. The exact correlation between our direct sense data and the results of measurement of any attribute confirms that our senses and instruments are accessing the same attribute of an ontologically single object. We kinaesthetically perceive feathers to be light and cannonballs to be heavy and this is borne out by the position of the pointer on the scales. We see a red object and the spectral peak of its reflected light lies between 625nm and 740nm. So, the scientists’ instruments can be regarded as tools that extend the reach of the senses, much as tools of manipulation, levers, knives, saws, hammers and so on extend the reach of the arms, with neither category of tool conferring any special ontological status onto the objects on which they act.
Introducing the Quantum World
How can we get to the crux of Smith’s book if it falls at such an early hurdle? Could it be argued that what distinguishes the physical from the corporeal object is not that the physical object is devoid of qualities, and is pure quantity, as Smith would have it, but by some other distinction, such as the size of the object. Might this be a way to save Smith’s project? Could it be that the critical distinction that Smith is searching for is between the quantum and the macroscopic world? In Physics and Philosophy, Heisenberg explores this distinction and proposes that the relationship of the quantum domain to the macroscopic domain is very like the Aristotelian relationship of potentiae to actualities[18].
Following Heisenberg, can we identify the macroscopic world with Smith’s corporeal plane, and quantum objects with the physical plane? With some reservations, this might be a valid distinction. It is, at least, worth exploring. Of course, Smith does not accept that the physical and the corporeal can be classified in this way, insisting that the distinction between physical and corporeal extends to macroscopic objects, that there exists for every directly perceived corporeal object X, a physical object SX, the potential not-thing that is studied by physics.[19] He quarrels with Heisenberg’s conflation of X and SX for macroscopic objects, or rather, Heisenberg’s implicit rejection of the existence of SX. But we have seen that his definitions of physical and corporeal, which encompass macroscopic objects, as well as quantum objects, are incoherent, so I am bound to find a definition that makes some sort of sense if we are to continue to follow his argument.
The chief reservation in accepting this amended definition of the physical and corporeal is that there is no distinct size-related boundary on one side of which objects behave as quantum objects, and on the other side as macroscopic objects. The key question here is whether there is a maximal limit to the size of quantum objects. Objects as large as molecules of 114 atoms have been observed to behave as quantum objects in the Young’s double slit experiment described below. It seems that there is no limit in principle to the size of an object which can be made to behave as a quantum object (although, in practice, this becomes more difficult the larger the object, and for macroscopic objects, practically impossible), and quantum theory supports the view that quantum behaviour is not limited in principle by size except by interaction with the environment. Smith implicitly acknowledges that this perspective is correct by discussing the meaning of the de Broglie wavelength for a macroscopic object. Be that as it may, we can continue to follow his argument if we define the physical domain to be restricted to obvious quantum entities such as photons or electrons, which fall indisputably on one side of the ill-defined boundary, which clearly behave unlike classical macroscopic objects, and which possess attributes which cannot be directly perceived, such as inherent spin, isospin, parity and colour charge, all of which are quantised attributes and therefore tightly linked to their quantum nature. This does not do any violence to Smith’s argument, as he relies on examples of the behaviour of just such quantum particles to develop his hypothesis.
A summary of Smith’s hypothesis goes as follows: he identifies quantum objects, and the quantum world in general with the scholastic concept of potency; and the collapse of the wave function[20] with a change of state from potency to act through a transformation from the quantum or physical plane to the corporeal, directly perceivable plane. We shall expand on these points later. One major interpretational difficulty that quantum objects present is known as the measurement problem, and Smith relies on the single-particle Young’s double slit experiment to illustrate it. A description of this pivotal experiment, using light as an example, follows. The principle applies to any pure quantum object – fundamental particles such as electrons, neutrons and protons, atoms, atomic nuclei, small molecules and so on.
In the classical Young’s double slit experiment a beam of temporally coherent monochromatic light is passed through two parallel slits. The width of each slit is a few times the wavelength of the light or less. With both slits open, a pattern of dark and bright bands or fringes, in the same orientation as the slits, can be detected on a screen placed beyond the plane of the slits. The separation between the fringes can be shown to be inversely proportional to the separation of the slits (the closer the slits are, the broader the fringes are), and proportional to the distance from the slits to the screen, and the phenomenon can be modelled precisely by interpreting the fringes to arise from the constructive and destructive interference of wavefronts arising from the two slits. If we close one of the slits, so that light can pass only through the other, then the fringe structure disappears and is replaced by a single bright area without fringes. This area is coincident with the region of the screen where the fringes previously appeared[21]. The disappearance of the fringes is readily explained by the fact that, in this case, the wavefront incident on the screen arises from only one slit, so that interference no longer occurs. This experiment demonstrates the fact that light can be considered to consist of waves, which is perfectly consistent with classical electromagnetic theory in which light is regarded as an electromagnetic wave.
But as Einstein demonstrated, light is quantised, it comes in minimal packets or quanta called photons, each of which have energy proportional to the frequency of the waves mentioned above times a constant (known as Planck’s constant). So, light can also be regarded to consist of a stream of particles. If we reduce the intensity of light in Young’s apparatus to a value so low that individual photons can be detected, and replace the passive screen with a screen that records the arrival of each photon by, for example, a localised flash, then we will see what appears initially to be randomly located flashes as each photon traversing the apparatus is detected at the screen. However, if we record the position of each flash, then we find that, over time, there is a greater concentration of flashes in those areas of the screen where the classical interference fringes were previously bright, and fewer or no flashes where the classical interference fringes were dark. In fact, after a large number of flashes have been recorded, the area density of flashes (number of flashes per unit area) follows the same function versus position across the screen as the intensity in the classical case[22]. Furthermore, if we close one slit, (or acquire “which path” information by observing which slit the photons pass through, which can only be done by absorbing the photons at one or other slit, in effect, blocking it) the fringes disappear as in the classical case. So, it appears that light particles have wave-like properties that allow them to interfere with one another.
If we reduce the intensity even further, so that we are sure that only one photon is present within the entire apparatus at any one time, surprisingly, we observe the same thing as before. After a large number of photons are detected, fringes appear exactly as before as a modulation of the area density of detected photons; and disappear if one or other slit is closed (or we measure through which slit each photon passes, which can be done only by effectively blocking one of the slits). It seems that a photon can interfere with itself but can only do so if both slits are open and we do not know which of the slits each photon passes through. If we put a detector at the slits, then we only ever detect a photon passing through one slit at a time. Yet the fact that the fringes disappear when one slit is closed so that all the photons pass through the other, seems to indicate that photons “know” when passing through one slit whether the other is open or closed, and that when both are open, they somehow pass through both, even though, when detected, they are only ever localised in one or the other. When the position of a photon is measured it is localised, but before it is measured it appears that one cannot say anything definite about its position, and in fact it seems that it is not a meaningful question to ask where the photon is before it is measured. This is the measurement problem, as conceived according to the Copenhagen interpretation of quantum mechanics.
The single photon Young’s experiment does not have a simple classical explanation. Quantum particles, such as photons, electrons and so on, do not behave in a way that can be described or explained in purely classical terms. Quantum objects display several other strange effects, including entanglement (measuring one attribute of a member of an entangled pair of quantum particles, fixes the attribute in the other particle, regardless of how far apart these particles are), the apparent loss of direct causality in quantum processes such as nuclear decay (individual instances of the decay of atomic nuclei with the emission of particles, such as electrons, alpha particles or electromagnetic radiation, does not appear to have a proximate cause, although it does have a precise half-life which precisely quantifies the statistical probability of decay), and quantum tunnelling whereby quantum particles can “tunnel” through potential barriers in a way that is forbidden in the classical world. One possibility is that these apparently strange effects can be explained by local hidden variables, also referred to as local reality. (Local hidden variables are classically deterministic effects which affect the particles, but which cannot be detected experimentally and are not accounted for in theory. Local hidden variables would imply that there are physical effects not described by quantum mechanics, making it an incomplete theory). John Bell’s theorem distinguishes between the predictions of classical physics combined with local hidden variables and those of quantum mechanics. Experiments have demonstrated almost, but not quite without doubt that the predictions of quantum mechanics are correct, and that therefore local hidden variable interpretations are ruled out.
The results of experiments which all but confirm the predictions of Bell’s theorem mean that either deterministic reality (the ability to speak meaningfully of the state of a quantum object before it has been measured, known in the trade as counterfactual definiteness) must be abandoned, or that deterministic reality is non-local (influences propagate faster than the speed of light). Currently, a majority of physicists choose the former option, because the latter, in its most simple flavours, violates the axioms of special relativity. Those who choose to abandon counterfactual definiteness, also abandon the notion that, knowing the state of an attribute of a system at any time, it is possible in principle to predict its future and past states to an arbitrary level of precision.
The Interpretations of Quantum Physics
The most popular interpretation of quantum mechanics, the Copenhagen interpretation, abandons determinism and counterfactual definiteness. According to the Copenhagen interpretation, a quantum object (photon, electron and so on), before it is detected or measured, is in a superposition of states, and the Schrödinger wave equation describes the evolution of the probability that it will be in any given state. When the quantum particle is detected or measured, the wave function is said to collapse to a single state (an eigenstate) with a probability density given by the square of the wave function at that time – so in the case of the Young’s double slit experiment, the photon or electron is detected at one or other slit or at a particular location at the image screen with a probability which matches the intensity or energy distribution of the classical Young’s experiment. There is a problematic role for the observer, or at least for the measurement apparatus, in causing the collapse. Smith’s interpretation of this phenomenon is that the state of superposition corresponds to the scholastic state of potency and that the collapse of the wave function corresponds to the state being actualised, a transition from potency to act, from physical to corporeal. Smith argues that the quantum world is not in act, does not actually exist, and that therefore physics, in this sense, studies entities which do not exist, but which are merely potentiae.
Here, and elsewhere, Smith erroneously equates quantum physics with physics in general – and he commits the fallacy of composition in ascribing features of quantum physics to the whole of physics. When one studies statistical thermodynamics, electromagnetism, geometrical or physical optics, plasma physics, classical mechanics, relativity or astrophysics, one is undoubtedly doing physics, but little or no consideration of weird quantum effects is needed.
Be that as it may, Smith’s argument is that physics is powerless to study the world as it is, what he calls the corporeal world, the world that we perceive directly, because the physical world that physics does study can now be seen to be non-deterministic or non-local, or both. According to him, the project to explain the world fully, initiated by Galileo, and pursued by Newton, Descartes and countless physicists since, is doomed to failure because the actual world is on a different and higher ontological plane from the objects studied by physics and because the events in the world can no longer be seen as the consequence of deterministic interactions of fundamental particles (atoms in the philosophical sense). This claim depends, of course, on whether Smith has given a correct and comprehensive description of the relevant aspects of quantum mechanics on which he relies.
There are notoriously many interpretations of quantum mechanics, few of which make unique predictions that can be tested by experiment, and which therefore are not scientific hypotheses in the strict sense. This has led some commentators to lose patience with attempts to interpret the underlying meaning of the very accurate quantitative predictions of quantum mechanics, prodding Feynman to make his famous quip, “Shut up and calculate”; or Hawking to remark, “When I hear of Schrödinger’s cat, I reach for my gun”. Nevertheless, interpretations abound, and although the Copenhagen interpretation on which Smith bases his thesis is the most popular (it is also known as the “standard” interpretation), it is far from being the only one. There are other interpretations, equally conforming to the predictions of non-relativistic quantum mechanics, which do not depend on the superposition of states and the collapse of the wave function[23].
Take the de Broglie-Bohm interpretation (also known as the pilot wave interpretation). According to this interpretation the quantum object is always in a single definite state (it is counterfactually definite), it is deterministic, and there is no role for an observer in collapsing a superposition of states (there is, in this scheme no superposition of states – quantum objects are always in a defined state). Smith does not discuss this interpretation at all in The Quantum Enigma (except to identify the Bohmian concept of a universal wavefunction, which is a necessary element of Bohmian mechanics, with what he calls Nature, the underlying ground of reality, without acknowledging that a universal wavefunction forms no part of the Copenhagen interpretation on which his argument rests) and fails to notice that the de Broglie-Bohm interpretation does not comport with his metaphysics. I note in passing that this is Smith’s normal method of argumentation – he ignores or gives short shrift to facts that stand against his proposition[24]. He does mention the de Broglie-Bohm interpretation in his 2019 book, Physics and Vertical Causation: The End of Quantum Reality, but dismisses it in less than a paragraph on the grounds that the collapse of the wave function is instantaneous, thus outside time, and therefore cannot be described by the Bohmian “differential equations”. In doing so, he rather misses the point that Bohmian mechanics does not rely on this collapse, this event outside time, at all.[25]
Other interpretations which do not call for superposition and which are compatible with counterfactual definiteness include Everett’s Many Worlds interpretation, Cramer’s Transactional interpretation and Nelson’s Stochastic interpretation. The first two of these are also deterministic. I don’t propose to discuss these in any detail – a full discussion of all the interpretations of quantum mechanics with their ongoing developments would fill an entire book or more. There is a vast literature on the subject. For our purposes, it is enough to note that any viable interpretation must and does make the same correct empirical predictions as the Copenhagen interpretation, for example with regard to experiments such as the Young’s double slit, the quantum eraser, delayed choice experiments, and EPR type (entangled particle) experiments. The fact is that these interpretations cannot be distinguished empirically, and so are not strictly physical theories[26]. Of course, people are attempting to develop interpretations which can be distinguished empirically, but as things stand, the choice of interpretation is largely a matter of personal preference. Smith has chosen to build his metaphysical thesis on the Copenhagen interpretation, and there is nothing inherently wrong with that, but he fails to expose this limitation to his readers, most of whom will not be aware of it. It does not matter for our purposes which, if any, of these interpretations is correct. What matters is that interpretations exist which satisfy the empirical constraints, but which do not comport with his metaphysics. Building his thesis around one interpretation lessens its import – if his argument were to follow necessarily from the observations, rather than from one interpretation amongst many, it would carry more weight than it does. As it stands, it is little more than the Copenhagen interpretation spiced up with some Heisenberg and Aquinas – an interpretation of an interpretation.
Furthermore, Smith entirely ignores the relevant phenomenon of decoherence, the existence of which is uncontroversial. Decoherence is the loss of the phase relationship between different states of the quantum subsystem by interaction with the environment, or with the measurement apparatus. It is the bane of quantum computing which requires the phase relationships to be maintained in the face of thermal and other environmental perturbations. The effect of decoherence is to reduce the quantum probabilities of the system to classical probabilities, and the theoretical basis of the process is well understood via, for example, von Neumann’s density matrix description. To be clear, because decoherence results from the entanglement of the quantum system with the environment, it doesn’t solve the measurement problem per se. Nevertheless, instantaneous and discontinuous wave function collapse as envisaged in the Copenhagen interpretation does not occur during decoherence, for example when a quantum particle is absorbed by a measurement apparatus. Instead, the entire system, the quantum subsystem plus the environment can be regarded as being entangled in a superposition of states with a vastly higher number of degrees of freedom. However, decoherence does explain the appearance of wave function collapse, since the unitary evolution (the uniquely defined evolution from a past to a future state) of the quantum subsystem described by the Schrödinger equation or the density operators, is interrupted by interaction of the quantum subsystem with the environment (or absorption of the quantum subsystem by the environment or detection apparatus) in a non-unitary manner. The pure quantum state of the quantum subsystem is therefore irreversibly lost, and the quantum subsystem falls into a mixed state. This interaction can be described either by the wave function or the density matrix formalism, and results in the quantum probabilities that are described by the subsystem wave function before environmental interaction, being reduced to classical probabilities after interaction, a process known as einselection. It also explains how a quantum subsystem entangled with a measuring apparatus, or any macroscopic object interacting with its environment, behaves as a classical statistical ensemble rather than a quantum superposition and thus appears to have collapsed into a state with a precise value for measured observables for each element of the subsystem. The localisation of macroscopic objects resulting from decoherence rapidly approaches the de Broglie wavelength with increasing object size, and the localisation occurs extremely rapidly, so that the superposition of states for a macroscopic object (or for a system of quantum objects absorbed by a measurement apparatus) cannot be practically observed either in time or space.
Although the precise significance of decoherence for resolving some of the philosophical problems of quantum mechanics is still a matter of debate, the fact remains that decoherence is an empirically verified physical phenomenon which creates difficulties for Smith’s description of reality. For example, as we have seen, the combination of the assumption that macroscopic objects obey quantum laws (albeit that they possess very large degrees of freedom), and the effect of decoherence on transforming the quantum probabilities of an ensemble to classical probabilities, explains the localisation and other classical observations of precise attributes of macroscopic objects (the localisation and other attributes become more precise and occur more quickly the larger the object, occurring in timescales and with precision indistinguishable from the classical case for objects larger than a few tens of nanometres). This is in stark contradiction to Smith’s suggestion that physical and corporeal objects are ontologically distinct, and that the wave function collapse is the actualisation of a potency, which results in the instantaneous change of ontological plane from the physical to the corporeal. Instead, what we see is that the same quantum description applies to both quantum and macroscopic objects, or to physical and corporeal in Smith’s language, with the classical probabilities and the localisation of corporeal objects explained by interaction with the environment and the influence of decoherence. It is a pity that Smith chose not to reveal this difficulty to his readers and declined to address the problem that the phenomenon poses for his thesis.
Do nucleons, electrons and atoms exist?
Let us now turn to another question, according to which Smith proposes that what he calls corporeal entities are not constituted by particles at all – that quantum particles cease to exist as particles once they are incorporated into a corporeal object[27]. I find this argument startling and entirely unconvincing. Take, for example, common salt, sodium chloride. A naturally occurring mineral crystal of sodium chloride, a halite crystal, more than large enough to be seen and therefore a corporeal object in Smith’s terms, has a basically cubic shape discernible by eye, which is one important and defining attribute of its substantial form. This is not an arbitrary property dictated from on high; there is a reason for it to be so, and the reason is that within the crystal, the sodium and chlorine ions are organised as two interpenetrating face centred cubic lattices, so that the nearest neighbours of each ion of one species are six ions of the other species arranged halfway along the lattice cell edges of the first species. This arrangement is expected because of the strong electrostatic attraction between the ions of the two species. That electrostatic attraction arises from the respective valency of atomic sodium and chlorine, which in turn is a consequence of the electronic structure of the atoms – the number of electrons in the outer or valence energy level of the atom[28]. Far from it being the case that the particles disappear on being incorporated into a corporeal object, it is the arrangement of the particles within the corporeal object, based on their properties, which gives rise to one of its key attributes – its shape. This arrangement can be probed and demonstrated by, for example, X-ray diffraction. Moreover, it has become possible in the last decade to image atoms in a lattice directly, including imaging the migration of individual atoms within the lattice in real time, using instruments such as the electron scanning tunnelling microscope (itself relying on a quantum effect), the field ion microscope, and the ptychographic electron microscope which renders extremely high resolution images down to the sub-atomic level. So much for particles disappearing in corporeal entities.
As an aside, we can consider another of the attributes of common salt crystals, an undeniable quality, its salty taste, which we perceive as a consequence of the presence of sodium cations from the salt dissolved in water being detected by dedicated cells on the tongue which make use of a cation channel, the epithelial sodium channel (encoded by four genes in humans, which are specific arrangements of a molecular structure in space on the sub-microscopic scale – see below for a further discussion of DNA). This is another example, to add to that of colour that we touched on earlier, that refutes Smith’s claim that qualitative attributes cannot be studied by physics.
While we are considering whether physical objects disappear or are subsumed into corporeal objects, let us consider the case of the structure of DNA, famously discovered by Crick, Watson, Wilkins and Franklin in 1953. It is a triumph of physics and biology that we understand the molecular basis of heredity, which is present in every cell of our bodies, and that of every cell in every other living creature on Earth. The famous last sentence of Watson and Crick’s 1953 Nature paper goes: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material”, and has proved not only to be accurately prophetic, but also links the functionality of heredity with the structure (the organisation in time and space) of an entity that Smith would presumably consider to be physical (in his terms – i.e. studied by physics and not perceptible to our unaided senses). Time and again, physicists and other scientists show that the sub-microscopic structure of things is not only real, but explains and determines the attributes, the qualities, the essence, the quiddity of the so-called corporeal objects which it constitutes. It seems after all that the world is built bottom up, not top down.
Smith’s Vertical Causation
In the final chapter of The Quantum Enigma, Smith introduces what he believes to be a new idea related to causality, which he calls vertical causation (VC). He contrasts VC with horizontal causation, which is that kind of causality studied by physics (and the other natural sciences) in which events in spacetime cause other events in spacetime according to the discoverable regularity described by the laws of physics. He proposes a categorically separate and higher form of causality, which operates outside time, and which constitutes his explanation for the instantaneous collapse of the wave function. His main argument in support of VC relies on the supposed instantaneity of the collapse, his assumption being that the collapse must be caused, but that the cause does not appear within the physical theory of quantum mechanics itself, and it must lie outside time, so it must therefore proceed from a transcendent plane which lies above and beyond normal reality. He likens it to an act of creativity. According to him, vertical causation is incapable, by definition, of being recognised or studied by physics or any form of natural science. Of course, all manner of charlatans[29] seek to place their claims beyond science, hoping to smuggle them past proper scrutiny. This is not to say that Smith is a charlatan, at least not knowingly, but any claim of this sort should be treated sceptically, should be defeasible and must be justified on some warrant other than its own assertion. Smith’s vertical causation does not meet these criteria.
We have already seen that Smith bases his argument on just one interpretation of quantum mechanics, in which he exploits the philosophically problematic collapse of the wave function, but that he ignores other, equally predictive interpretations that do not require this state selection. The notion that vertical causation is a concept which proceeds naturally and necessarily from empirical and theoretical quantum mechanics is therefore at best enfeebled, at worst defeated. Vertical causation is an idea that purports to solve the philosophical problems arising from one interpretation of quantum mechanics by the rather arbitrary introduction of a form of causality that not only lies beyond perception but cannot be recognised or studied by science. In truth this is nothing more nor less than a magical “explanation” explaining nothing. One understands the belief of some theists that the temporal and spatial world is constantly brought into being and is held in being by a transcendent non-temporal (and spatially unbound) entity, by God. Nevertheless, one would argue that Smith has not made the case that this continual putative act of creation is specifically manifested in the collapse of the wave function, the transition from potency to act, as he would have it, in the quantum domain; and he has certainly not developed an argument that is cogent on purely metaphysical rather than on a priori religious grounds.
We are accustomed to think of causality in the natural world, horizontal causation in Smith’s jargon, where one event causes the next according to discernible natural laws, backwards in a great chain to the start of time and reaching forwards into the distant future. The idea that a different form of transcendent causality lies at the heart of the natural world results in a startling epistemological crisis. According to Smith, every event in which quantum potentiae are actualised is caused vertically and instantaneously by an act of creation outside time and thus ever present. It is but a small step, or no step at all, to occasionalism, the doctrine which denies horizontal causation altogether, and which holds that every event is directly caused by God, and that the appearance of natural causality in the world is a consequence of God acting according to custom, but that it is possible for him to do otherwise. For example, Al Ghazali, the Islamic philosopher who first stated this position, gave the example of cotton in a fire – the cotton burns, not because of the fire, or the fact that the fire is hot, but because God directly causes it to burn, and this is so for every apparent causal chain. Natural cause is an illusion. The Catholic philosopher, Nicolas Malebranche, independently proposed occasionalism as a response to Cartesian dualism. On this idea, physics would not study the regularities of the physical world, and how one event causes the next, but instead, the habits or customs of God.
However, this epistemological crisis is avoided in the case of VC, because we see that the idea of a vertical cause of the collapse of the wave function is arbitrary and unnecessary. Decoherence, which we have discussed above, is not an instantaneous process, or a process outside time, as Smith claims for the apparently uncaused collapse of the wave function. A quantum subsystem can be completely coherent, or completely decoherent, or partially coherent. For example, Haroche and collaborators, in a seminal paper, were the first to report the measurement of a quantum subsystem in transition from complete coherence to decoherence[30]. It is clearly a physical process, amenable to physical experiment, which can be described by a physical theory based on various different but equivalent mathematical formalisms. The fact that the appearance of wave function collapse can be modelled and measured in time undermines Smith’s claim that it is a discontinuity which occurs instantaneously and outside time and that therefore explicable only as a creative, transcendental act.
Although believers are free to find their teleology wherever they will, the history of “God-of-the-gaps” arguments is not a happy one. In essence, Smith’s solution to the measurement problem and the other philosophical problems arising from quantum mechanics is, to put it grossly, “God does it”, to succumb to occasionalism, or something very like it. This is an unsatisfactory explanation for observations of the natural world, which, if accepted would dismantle the foundation of science. His assertion that the true solution lies beyond the remit of physics should not and will not cut the mustard. Theorists will continue to develop interpretations and extensions to quantum mechanics in the reasonable expectation that a quantum mechanical field theory will be found that is consistent with relativity. Whether a naively realist interpretation of quantum mechanics will ever prevail is still an open question, but that seems unlikely – there is no guarantee that human minds, which have evolved to deal with the everyday macroscopic domain, will prove capable of visualising the quantum domain on similar terms. While Smith is right to question whether physics is a project which can ever understand the world without residue, it does not follow that the residue de jour should be identified with the supernatural. It certainly does not follow that quantum collapse itself is to be associated with a supernatural act, or, as Smith would put it, with vertical causation.
In Conclusion
So, there are several flaws in The Quantum Enigma. From the outset, Smith’s campaign against Cartesian dualism is ill targeted, since the actual default stance of scientists today is not substance dualism at all, but a form of Realist Monism. His attempt to establish an ontological distinction between the “physical” and the “corporeal” fails on the terms of his own examples, and its failure undermines the entire edifice of his argument. He conflates one subset of physics, quantum mechanics, with the entire discipline, which further confounds the ontological distinction. In direct contradiction to his ideas, we are forced to conclude that the apple as perceived by the intellect, with all its rich associations, and the apple as studied by physics and biology with measurement and reason, is one and the same ontological entity. Next, his proposition that quantum objects in a state of superposition are equivalent to objects in a state of potency, and that the collapse of the wave function is an instantaneous change of ontological plane, from the physical to the corporeal, from potency to act, suffers from the weakness that it depends on one interpretation of quantum mechanics amongst many, and that it does not proceed naturally and necessarily from observation, from empirical physics. Although it is not precluded by the physics, Smith gives us no compelling reason to accept his interpretation of an interpretation over any other. Furthermore, Smith does not acknowledge the existence of the phenomenon of decoherence, which explains the appearance of wave function collapse in purely physical terms and does not acknowledge the difficulties it presents for his thesis. Finally, the concept of vertical causation, which is built on the notion that the wave function collapse is outside of time, also depends on one interpretation of quantum mechanics amongst many, is refuted by the phenomenon of decoherence, and is, moreover, a “God-of-the-gaps” argument.
Ultimately, The Quantum Enigma is disappointing, because Smith declines to engage in detail both with relevant ongoing philosophical discourse and with the science[31]. He limits his discussion only to those threads of philosophy and science from which he believes he can weave his cloth, but the weave turns out to be a fatally loose one. The scholarly tone of his book hides a polemic tract under a superficial gloss. Despite its opaque language and dense construction, it never properly engages with potential counterarguments or with the extensive literature dealing with its various matters. In the final analysis, it is simply lightweight. This view of Smith’s work is reinforced by his more recent publications, and his claim to present a radical, transformational thesis remains unrealised.
[1] Although Smith is a Roman Catholic, he is a member of the traditional, esoteric and perennialist metaphysical school (alongside philosophers such as René Guénon, Frithjof Schuon, Anand Coomaraswamy, Harry Oldmeadow and Hossein Nasr). This school condemns all aspects of modernity and values
pursuits such as arcane symbolism, numerology, sacred geometry, alchemy, astrology and other secret “knowledge” accessible only to initiates.
[2] The Quantum Enigma, 2005, p20
[3] Husserl, Cartesian Meditations, available on-line.
[4] Heisenberg, Physics and Philosophy, first published 1962, Penguin Classics edition 2000, p39.
[5] The Quantum Enigma, 2005, p24
[6] Smith is so opposed to Descartes’ notions that he claims, later in the book, that Descartes invented analytical geometry to destroy the idea of potency and act in mathematics by coordinatizing the continuum, and that Descartes’ primary motivation was to “extirpate” the continuum, which Smith sees as the material principle, in the sense of scholastic matter, in the quantitative domain. Whatever, the merits or otherwise of Descartes’ philosophy, the idea that analytical geometry was invented primarily as an attack on traditional metaphysics is grotesque.
[7] The Quantum Enigma, 2005, p26
[8] At this point in his argument, and as an aside, Smith proposes that direct perception, without instrumentation, of corporeal objects and their essence is the foundation of traditional sciences, for example the five elements of the ancient cosmologies, or the five bhudas of Hindu doctrine.
[9] There is an extensive science dedicated to understanding the psychophysical aspects of direct weight perception – see, for example, Jones (1986), Perception of force and weight: Theory and research, Psychol. Bull. 100, 29–42. The integration of visual and touch perception in lift planning has been studied, for example, Jeannerod et al (1995), Grasping objects: the cortical mechanisms of visuomotor transformation, Trends Neurosci. 18, 314–320
[10] In fact, not a one dimensional, but a multi-dimensional continuum, since pure spectral colours are mixed in most naturally occurring colours
[11] Smith, The Quantum Enigma, p14
[12] Feynman’s monologue on the subject is available in many places on the web, for example: https://www.brainpickings.org/2013/01/01/ode-to-a-flower-richard-feynman/
[13] Richard Dawkins, Unweaving the Rainbow; Science, Delusion and the Appetite for Wonder, 1998
[14] Smith, The Quantum Enigma, p34
[15] René Guénon, The Reign of Quantity and the Signs of the Times, first published in French, 1945; third edition in English, Sophia Perennis 1995.
[16] The fact that Smith declares an overwhelming preference for Guénon’s philosophy, as it pertains to modernity, over the philosophy of Jacques Maritain, will tell all educated Catholics what they need to know about Smith’s predilections – Smith, Science and Myth, 2010 revised 2012, Angelico Press/Sophia Perennis p31.
[17] For an example of Smith’s adherence to numerology and astrology, see Smith, Science and Myth, Chapter 6
[18] Heisenberg, Physics and Philosophy, first published 1962, Penguin Classics edition 2000, p22
[19] Smith, The Quantum Enigma, p74
[20] Smith refers to the collapse of the “state vector”. The state vector and the wave function are different but related mathematical concepts which are used to describe the behaviour of quantum objects, and for our purposes the phrases “collapse of the state vector” and “collapse of the wave function” are synonymous and refer to the same event – I prefer the latter phrase because it is in more common use.
[21] Smith’s description of the Young’s experiment is technically incorrect in one respect important for a correct understanding of it. As his error does not fundamentally affect his argument or my response, but only the understanding of his readers, we needn’t explore his error in detail, except to register surprise that a physicist would make such an elementary error in a publication.
[22] If one applies the Schrödinger equation, which describes the evolution of the probability distribution for the state of a quantum particle over time, to the interaction of particles with Young’s slits, one recovers a probability distribution for the location of a particle at the detection screen which matches, exactly, the wave interference intensity in the classical wave case (and this is true for any interaction of quantum particles which have a classical wave analogue described by classical diffraction and interference theory. This is a necessary condition for the Schrödinger equation to be an accurate description of the probability of particle location, since the classical case can be regarded as a very large ensemble of particles arriving at the screen at every moment). Note that there is a technical issue with naïvely using the Schrödinger equation to model the behaviour of photons and other relativistic particles, but that need not trouble us here.
[23] Note that there are also different mathematical formulations of quantum mechanics. The Schrödinger formulation and the Heisenberg matrix mechanics were the first complete formulations of non-relativistic QM. Later formulations include Feynman’s path integral. These, and other, formulations can be shown to be fundamentally equivalent, but each are useful in solving different problems. Formulations are not the same as interpretations, the former being transactional and the latter philosophical, but different formulations emphasise different aspects of various interpretations.
[24] Smith’s lack of proper attention to facts or views that oppose his position, which I note here, extends across much of his discourse, and includes his silence on the philosophies of realism, monism and emergentism, modern neurophysiology and other aspects of the science of consciousness, interpretations of QM other than the Copenhagen interpretation, and decoherence in quantum theory. It extends to ignoring the devastating criticism of the inconsequential rabble whom he quotes in his support, which includes such luminaries as Berthault, Popov, Gentry, Humphreys, Johnson and their ilk. In a striking display of projection, in Science and Myth p194 he accuses Stephen Hawking of the same offence, listing a raggle-taggle band of pseudoscientific fellow travellers whom he thinks Hawking should have noticed. On the other hand, the science and philosophy he sweeps under the carpet are espoused by the leading scholars of the day in the appropriate disciplines.
[25] If Bohmian mechanics is deterministic and counterfactually definite, why is it not the preferred interpretation of QM? The answer is that, at least in its original form, the trajectories of the particles are distinctly non-Newtonian. Furthermore, it is non-local and therefore cannot easily be reconciled with special relativity. Versions of Bohmian mechanics which are Lorentz covariant and built on a Riemannian space-time have been and are being developed, but the success of these extensions remain in dispute.
[26] Objective Collapse theories, such as the Ghirardi-Rimini-Weber interpretation regard quantum mechanics as an incomplete theory and, to that extent, are actual physical theories, since they hypothesise extensions to it.
[27] Smith, The Quantum Enigma, p118
[28] The outer shell of sodium contains one electron, so the sodium atom readily gives up that electron to form a positive ion. The outer shell of chlorine contains seven electrons, so readily accepts one electron to complete the shell thus forming a negative ion. Sodium chloride is an ionic crystal in which the chlorine atom accepts an electron from a sodium atom to form an electrostatically bound lattice of positive sodium and negative chlorine ions.
[29] For example, adherents of homeopathy, astrology, psychic phenomena, crystal healing, Reiki and so forth attempt to bypass scientific scrutiny by declaring that their claims work in ways that are inaccessible to scientific validation.
[30] Haroche et al, Observing the Progressive Decoherence of the “Meter” in a Quantum Measurement. Phys. Rev. Lett. 77 (24): 4887–4890
[31] Smith provides an appendix in The Quantum Enigma in which he lays out the mathematical formalism of quantum mechanics based on the Schrödinger approach. It is not clear what he hopes to achieve by this – it offers no value to those equipped to understand it since it is entirely derivative and extremely elementary – there is nothing that cannot be found in the first few pages of any relevant undergraduate text; and clearly it offers no value to those unequipped or unwilling to engage with it. Since The Quantum Enigma depends on interpretations of quantum mechanics, Smith would have been better advised to set out a comprehensive comparison of the various interpretations, discuss their philosophical implications, and face up to those implications for his thesis, instead of pretending that only one exists. He should also have confronted the phenomenon of decoherence.
This entry was posted in Uncategorized. Bookmark the permalink. |
70701930c3b83d21 |
Note: I'd prefer an explanation that is easily understood by laypeople; terms specific to quantum computing should preferably be explained, in relatively simple terms.
• 8
$\begingroup$ To those who wish to answer this question: It would be great if you point out the difference between classical and quantum probabilities in your answers. That is, how is a quantum state like $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$ different from a coin which when tossed in the air has a $50-50$ chance of turning out to be heads or tails. Why can't we say that a classical coin is a "qubit" or call a set of classical coins a system of qubits? $\endgroup$ – Sanchayan Dutta Jun 18 '18 at 16:26
• 1
$\begingroup$ Your question has attracted a lot of negatively voted answers including mine, which is quite discouraging considering how much time people spent on the answers. In most SE's you're required to do at least some basic research on your own before asking a question. The first paragraph of your question suggests that you haven't read about what "quantum" is. There is already a LOT of introductory texts on quantum computing where the answer to your question is provided in the first few pages. $\endgroup$ – user1271772 Jun 19 '18 at 14:44
• 3
$\begingroup$ Possible duplicate of What is the difference between a qubit and classical bit? $\endgroup$ – MEE Jun 19 '18 at 16:16
• 2
$\begingroup$ When you write "easily understood by laypeople", just how "lay" are we talking? Can one assume they know about Huygen's principle? About complex numbers? About vector spaces? About momentum? About differential equations? About boolean logic? This seems to me a very vague constraint. I expect that there is a set of mathematical prerequisites, without which any description of 'a qubit' would amount to some vaguely technical sounding words which fail to actually convey anything in a convincing way. $\endgroup$ – Niel de Beaudrap Jun 19 '18 at 17:07
• 3
This is a good question and in my view gets at the heart of a qubit. Like the comment by @Blue, it's not that it can be an equal superposition as this is the same as a classical probability distribution. It is that it can have negative signs.
Take this example. Imagine you have a bit in the $0$ state and represent it as vector $\begin{bmatrix}1 \\0 \end{bmatrix}$ and then you apply a coin flipping operation which can be represented by a stochastic matrix $\begin{bmatrix}0.5 & 0.5 \\0.5 & 0.5 \end{bmatrix}$ this will make a classical mixture $\begin{bmatrix}0.5 \\0.5 \end{bmatrix}$. If you apply this twice it will still be a classical mixture $\begin{bmatrix}0.5 \\0.5 \end{bmatrix}$.
Now lets go to the quantum case and start with a qubit in the $0$ state which again is represented by $\begin{bmatrix}1 \\0 \end{bmatrix}$. In quantum, operations are represented by a unitary matrix which has the property $U^\dagger U = I$. The simplest unitary to represent the action of a quantum coin flip is the Hadamard matrix $\begin{bmatrix}\sqrt{0.5} & \sqrt{0.5} \\\sqrt{0.5} & -\sqrt{0.5} \end{bmatrix}$ where the first column is defined so that after one operation it makes the state $|+\rangle =\begin{bmatrix}\sqrt{0.5} \\\sqrt{0.5} \end{bmatrix}$, then the second column must be $\begin{bmatrix}\sqrt{0.5} & a \\\sqrt{0.5} & b \end{bmatrix}$ where $|a|^2 = 1/2$, $|b|^2 = 1/2$ and $ab^* = -1/2$. A solution to this is $a =\sqrt(0.5)$ and $b=-a$.
Now lets do the same experiment. Applying it once gives $\begin{bmatrix}\sqrt{0.5} \\\sqrt{0.5} \end{bmatrix}$ and if we measured (in the standard basis) we would get half the time 0 and the other 1 (recall in quantum Born rule is $P(i) = |\langle i|\psi\rangle|^2$ and why we need all the square roots). So it is like the above and has a random outcome.
Lets apply it twice. Now we would get $\begin{bmatrix} 0.5+0.5 \\0.5-0.5\end{bmatrix}$. The negative sign cancels the probability of observing the 1 outcome and a physicist we refer to this as interference. It is these negative numbers that we get in quantum states which cannot be explained by probability theory where the vectors must remain positive and real.
Extending this to n qubits gives you a theory that has an exponential that we can't find efficient ways to simulate.
This is not just my view. I have seen it shown in the talks by Scott Aaronson and I think its best to say quantum is like “Probability theory with Minus Signs” (this is a quote by Scott).
I am attaching the slides I like to give for explaining quantum (if it is not standard to have slides in an answer I am happy to write the math out to get across the concepts) enter image description here
| improve this answer | |
• $\begingroup$ I see in the other question people dont understand what i mean by interference. I'm new to stack exchange but not quantum so how do you want me to fill in more details. Either edit above or post another comment. $\endgroup$ – Jay Gambetta Jun 24 '18 at 16:34
• $\begingroup$ ok @blue i just edit above and you can edit how you like. $\endgroup$ – Jay Gambetta Jun 24 '18 at 18:27
• $\begingroup$ Thanks for the edit! Can you please mention the source of the slides? $\endgroup$ – Sanchayan Dutta Jun 24 '18 at 18:32
• $\begingroup$ How do i do that. The source is me except the one i remade from seeing Scott talk. $\endgroup$ – Jay Gambetta Jun 24 '18 at 18:34
• $\begingroup$ @JayGambetta I meant this slide: i.stack.imgur.com/rvoOJ.png in your answer. Can you add the source from where you got it? $\endgroup$ – Sanchayan Dutta Jun 24 '18 at 18:35
I'll probably be expanding this more (!) and adding pictures and links as I have time, but here's my first shot at this.
Mostly math-free explanation
A special coin
Let's begin by thinking about normal bits. Imagine this normal bit is a coin, that we can flip to be heads or tails. We'll call heads equivalent to "1" and tails "0". Now imagine instead of just flipping this coin, we can rotate it - 45${}^\circ$ above horizontal, 50$^\circ$ above horizontal, 10$^\circ$ below horizontal, whatever - these are all states. This opens up a huge new possibility of states - I could encode the whole works of Shakespeare into this one coin this way.
But what's the catch? No such thing as a free lunch, as the saying goes. When I actually look at the coin, to see what state it's in, it becomes either heads or tails, based on probability - a good way to look at it is if it's closer to heads, it's more likely to become heads when looked at, and vice versa, though there's a chance the close-to-heads coin could become tails when looked at.
Further, once I look at this special coin, any information that was in it before can't be accessed again. If I look at my Shakespeare coin, I just get heads or tails, and when I look away, it still is whatever I saw when I looked at it - it doesn't magically revert to Shakespeare coin. I should note here that you might think, as Blue points out in the comments, that
Given the huge advancement in modern day technology there's nothing stopping me from monitoring the exact orientation of a coin tossed in air as it falls. I don't necessarily need to "look into it" i.e. stop it and check whether it has fallen as "heads" or "tails".
This "monitoring" counts as measurement. There is no way to see the inbetween state of this coin. None, nada, zilch. This is a bit different from a normal coin, isn't it?
So encoding all the works of Shakespeare in our coin is theoretically possible but we can never truly access that information, so not very useful.
Nice little mathematical curiosity we've got here, but how could we actually do anything with this?
The problem with classical mechanics
Well, let's take a step back a minute here and switch to another tack. If I throw a ball to you and you catch it, we can basically model that ball's motion exactly (given all parameters). We can analyze its trajectory with Newton's laws, figure out its movement through the air using fluid mechanics (unless there's turbulence), and so forth.
So let's set us up a little experiment. I've got a wall with two slits in it and another wall behind that wall. I set up one of those tennis-ball-thrower things in the front and let it start throwing tennis balls. In the meantime, I'm at the back wall marking where all our tennis balls end up. When I mark this, there are clear "humps" in the data right behind the two slits, as you might expect.
Now, I switch our tennis-ball-thrower to something that shoots out really tiny particles. Maybe I've got a laser and we're looking where the photons look up. Maybe I've got an electron gun. Whatever, we're looking at where these sub-atomic particles end up again. This time, we don't get the two humps, we get an interference pattern.
enter image description here
Does that look familiar to you at all? Imagine you drop two pebbles in a pond right next to each other. Look familiar now? The ripples in a pond interfere with each other. There are spots where they cancel out and spots where they swell bigger, making beautiful patterns. Now, we're seeing an interference pattern shooting particles. These particles must have wave-like behavior. So maybe we were wrong all along. (This is called the double slit experiment.)Sorry, electrons are waves, not particles.
Except...they're particles too. When you look at cathode rays (streams of electrons in vacuum tubes), the behavior there clearly shows electrons are a particle. To quote wikipedia:
So...they're both. Or rather, they're something completely different. That's one of several puzzles physicists saw at the beginning of the twentieth century. If you want to look at some of the others, look at blackbody radiation or the photoelectric effect.
What fixed the problem - quantum mechanics
These problems lead us to realize that the laws that allow us to calculate the motion of that ball we're tossing back and forth just don't work on a really small scale. So a new set of laws were developed. These laws were called quantum mechanics after one of the major ideas behind them - the existence of fundamental packets of energy, called quanta.
The idea is that I can't just give you .00000000000000000000000000 plus a bunch more zeroes 1 Joules of energy - there is a minimum possible amount of energy I can give you. It's like, in currency systems, I can give you a dollar or a penny, but (in American money, anyway) I can't give you a "half-penny". Doesn't exist. Energy (and other values) can be like that in certain situations. (Not all situations, and this can occur in classical mechanics sometimes - see also this; thanks to Blue for pointing this out.)
So anyway, we got this new set of laws, quantum mechanics. And the development of those laws is complete, though not completely correct (see quantum field theories, quantum gravity) but the history of their development is kind of interesting. There was this guy, Schrodinger, of cat-killing (maybe?) fame, who came up with the wave equation formulation of quantum mechanics. And this was preferred by a lot of physicists preferred this, because it was sort of similar to the classical way of calculating things - integrals and hamiltonians and so forth.
Another guy, Heisenberg, came up with another totally different way of calculating the state of a particle quantum-mechanically, which is called matrix mechanics. Yet another guy, Dirac, proved that the matrix mechanical and wave equation formulations were equal.
So now, we must switch tacks again - what are matrices, and their friend vectors?
Vectors and matrices - or, some hopefully painless linear algebra
Vectors are, at their simplest, arrows. I mean, they're on a coordinate plane, and they're math-y, but they're arrows. (Or you could take the programmer view and call them lists of numbers.) They're quantities that have a magnitude and a direction. So once we have this idea of vectors...what might we use them for? Well, maybe I have an acceleration. I'm accelerating to the right at 1 m/s$^2$, for example. That could be represented by a vector. How long that arrow is represents how quickly I am accelerating, the arrow would be pointing right along the x-axis, and by convention, the arrow's tail would be situated at the origin. We notate a vector by writing something like [2, 3] which would notate a vector with its tail at the origin and its point at (2, 3).
So we have these vectors. What sorts of math can I do with them? How can I manipulate a vector? I can multiply vectors by a normal number, like 3 or 2 (these are called scalars), to stretch it, shrink it (if a fraction), or flip it (if negative). I can add or subtract vectors pretty easily - if I have a vector (2, 3) + (4, 2) that equals (6, 5). There's also stuff called dot products and cross products that we won't get into here - if interested in any of this, look up 3blue1brown's linear algebra series, which is very accessible, actually teaches you how to do it, and is a fabulous way to learn about this stuff.
Now let's say I have one coordinate system, that my vector is in, and then I want to move that vector to a new coordinate system. I can use something called a matrix to do that. Basically we can define in our system two vectors, called $\hat{i}$ and $\hat{j}$, read i-hat and j-hat (we're doing all this in two dimensions in the real plane; you can have higher dimension vectors with complex numbers ($\sqrt{-1} = i$) as well but we're ignoring them for simplicity), which are vectors that are one unit in the x direction and one unit in the y direction - that is, (0, 1) and (1, 0).
Then we see where i-hat and j-hat end up in our new coordinate system. In the first column of our matrix, we write the new coordinates of i-hat and in the second column the new coordinates of j-hat. We can now multiply this matrix by any vector and get that vector in the new coordinate system. The reason this works is because you can rewrite vectors as what are called linear combinations. This means that we can rewrite say, (2, 3) as 2*(1, 0) + 3*(0, 1) - that is, 2*i-hat + 3*j-hat. When we use a matrix, we're effectively re-multiplying those scalars by the "new" i-hat and j-hat. Again, if interested, see 3blue1brown's videos. These matrices are used a lot in many fields, but this is where the name matrix mechanics comes from.
Tying it all together
Now matrices can represent rotations of the coordinate plain, or stretching or shrinking the coordinate plane or a bunch of other things. But some of this behavior...sounds kind of familiar, doesn't it? Our little special coin sounds kind of like it. We have this rotation idea. What if we represent the horizontal state by i-hat, and the vertical by j-hat, and describe what the rotation of our coin is using linear combinations? That works, and makes our system much easier to describe. So our little coin can be described using linear algebra.
What else can be described linear algebra and has weird probabilities and measurement? Quantum mechanics. (In particular, this idea of linear combinations becomes the idea called a superposition, which is where the whole idea, oversimplified to the point it's not really correct, of "two states at the same time" comes from.) So these special coins can be quantum mechanical objects. What sorts of things are quantum mechanical objects?
• photons
• superconductors
• electron energy states in an atom
Anything, in other words, that has the discrete energy (quanta) behavior, but also can act like a wave - they can interfere with one another and so forth.
So we have these special quantum mechanical coins. What should we call them? They store an information state like bits...but they're quantum. They're qubits. And now what do we do? We manipulate the information stored in them with matrices (ahem, gates). We measure to get results. In short, we compute.
Now, we know that we cannot encode infinite amounts of information in a qubit and still access it (see the notes on our "shakespeare coin"), so what then is the advantage of a qubit? It comes in the fact that those extra bits of information can affect all the other qubits (it's that superposition/linear combination idea again), which affects the probability, which then affects your answer - but it's very difficult to use, which is why there are so few quantum algorithms.
The special coin versus the normal coin - or, what makes a qubit different?
So...we have this qubit. But Blue brings up a great point.
There are several differences - the way that measurement works (see the fourth paragraph), this whole superposition idea - but the defining difference (Mithrandir24601 pointed this out in chat, and I agree) is the violation of the Bell inequalities.
Let's take another tack. Back when quantum mechanics was being developed, there was a big debate. It started between Einstein and Bohr. When Schrodinger's wave theory was developed, it was clear that quantum mechanics would be a probabilistic theory. Bohr published a paper about this probabilistic worldview, which he concluded saying
The idea of determinism has been around for a while. Perhaps one of the more famous quotes on the subject is from Laplace, who said
The idea of determinism is that if you know all there is to know about a current state, and apply the physical laws we have, you can figure out (effectively) the future. However, quantum mechanics decimates this idea with probability. "I myself am inclined to give up determinism in the world of atoms." This is a huge deal!
Albert Einstein's famous response:
(Bohr's response was apparently "Stop telling God what to do", but anyway.)
For a while, there was debate. Hidden variable theories came up, where it wasn't just probability - there was a way the particle "knew" what it was going to be when measured; it wasn't all up to chance. And then, there was the Bell inequality. To quote Wikipedia,
In its simplest form, Bell's theorem states
And it provided a way to experimentally check this. It's true - it is pure probability. This is no classical behavior. It is all chance, chance that affects other chances through superposition, and then "collapses" to a single state upon measurement (if you follow the Copenhagen interpretation). So to summarize: firstly, measurement is fundamentally different in quantum mechanics, and secondly, that quantum mechanics is not deterministic. Both of these points mean that any quantum system, including a qubit, is going to be fundamentally different from any classical system.
A small disclaimer
enter image description here
As xkcd wisely points out, any analogy is an approximation. This answer isn't formal at all, and there's a heck of a lot more to this stuff. I'm hoping to add to this answer with a slightly more formal (though still not completely formal) description, but please keep this in mind.
• Nielsen and Chuang, Quantum Computing and Quantum Information. The bible of quantum computing.
• 3blue1brown's linear algebra and calculus courses are great for the math.
• Michael Nielsen (yeah, the guy who coauthored the textbook above) has a video series called Quantum Computing for the Determined. 10/10 would recommend.
• quirk is a great little simulator of a quantum computer that you can play around with.
• I wrote some blog posts on this subject a while back (if you don't mind reading my writing, which isn't very good) that can be found here which attempts to start from the basics and work on up.
| improve this answer | |
First let me give examples of classical bits:
• In a CPU: low voltage = 0, high voltage = 1
• In a hard drive: North magnet = 0, South magnet = 1
• In a barcode on your library card: Thin bar = 0, Thick bar = 1
• In a DVD: Absence of a deep microscopic pit on the disk = 0, Presence = 1
In every case you can have something in between:
• If "low voltage" is 0 mV, and "high voltage" is 1 mV, you can have a medium voltage of 0.5 mV
• You can have a magnet polarized in any direction, such as North-West
• You can have lines in a barcode that are of any width
• You can have pits of various depths on the surface of a DVD
In quantum mechanics things can only exist in "packages" called "quanta". The singular of "quanta" is "quantum". This means for the barcode example, if the thin line was one "quantum", the thick line can be two times the size of the thin line (two quanta), but it cannot be 1.5 times the thickness of the thin line. If you look at your library card you will notice that you can draw lines that are of thickness 1.5 times the size of the thin lines if you want to, which is one reason why barcode bits are not qubits.
There do exist some things in which the laws of quantum mechanics do not permit anything between the 0 and the 1, some examples are below:
• spin of an electron: It's either up (0) or down (1), but cannot be in between.
• energy level of an electron: 1st level is 0, 2nd level is 1, there is no such thing as 1.5th level
I have given you two examples of what a qubit can be physically: spin of an electron, or energy level of an electron.
What purpose does it serve in quantum computing?
The reason why the qubit examples I gave come in quanta are because they exist as solutions to something called the Schrödinger Equation. Two solutions to the Schrödinger equation (the 0 solution, and the 1 solution) can exist at the same time. So we can have 0 and 1 at the same time. If we have two qubits, each can be in 0 and 1 at the same time, so collectively we can have 00, 01, 10, and 11 (4 states) at the same time. If we have 3 qubits, each of them can be in 0 and 1 at the same time, so we can have 000, 001, 010, 011, 100, 101, 110, 111 (8 states) at the same time. Notice that for $n$ qubits we can have $2^n$ states at the same time. That is one of the reasons why quantum computers are more powerful than classical computers.
| improve this answer | |
A qubit is a two-dimensional quantum system, and the quantum generalization of a bit. Like bits, qubits can be in the states 0 and 1. In quantum notation, we write these as $|0\rangle$ and $|1\rangle$. They can also be in superposition states such as
$$ |\psi_0 \rangle = \alpha |0\rangle + \beta |1\rangle$$
Here $\alpha$ and $\beta$ are complex numbers in general. But for this answer, I'll just assume they are normal real numbers. The name I've given this state, $|\psi_0 \rangle$, is just for convenience. It has no deeper meaning.
Extracting an output from a qubit is done by a process known as measurement. The most common measurement is what we call the $Z$ measurement. This means just asking the qubit whether it is 0 or 1. If it is in a superposition state, such as the one above, the output will be random. You'll get 0 with probability $\alpha^2$ and 1 with probability $\alpha^2$ (so clearly these numbers need to satisfy ($\alpha^2+\beta^2=1$).
This might make it seem that superpositions are just random number generators, but that isn't the case. For every $ \alpha |0\rangle + \beta |1\rangle$ , we can construct the following state
$$ |\psi_1 \rangle = \beta |0\rangle - \alpha |1\rangle$$
This is as different to $|\psi_0\rangle$ as $|0\rangle$ is to $|1\rangle$. We call it a state that is orthogonal to $|\psi_0\rangle$.
With this we can define an alternative measurement that looks at whether our qubit is $|\psi_0\rangle$ or $|\psi_1\rangle$. For this measurement, it is the $|\psi_0\rangle$ and $|\psi_1\rangle$ states that give us definite answers. For other states, such as $|0\rangle$ and $|1\rangle$, we'd get random outputs. This is because they can be thought of as superpositions of $|\psi_0\rangle$ and $|\psi_1\rangle$.
So, trying to summarize a little, qubits are objects that we can use to store a bit. We usually do this in the states $|0\rangle$ and $|1 \rangle$, but in fact, we could choose to do it in any of the infinite possible pairs of orthogonal states. If we want to get the bit out again with certainty, we have to measure according to the encoding we used. Otherwise, there will always be a degree of randomness. For more detail on all this, you can check out a blog post I once wrote.
To start getting interesting things happing, we need more than one qubit. Since $n$ bits can be made into $2^n$ different bit strings, there is an exponentially large number of orthogonal states that can be included in our superpositions of $n$ qubits. This is the space in which we can do all the tricks of quantum computation.
But as for how that works, I'll have to refer you to the rest of the questions and answers in this Stack Exchange.
| improve this answer | |
All we observe in quantum technologies (photons, atoms, etc) are bits (either a 0 or a 1).
At the essence, no one really knows what a quantum bit is. Some people say it's an object that is "both" 0 and 1; others say it's about things to do with parallel universes; but physicists don't know what it is, and have come up with interpretations that are not proven.
The reason for this "confusion" is due to two factors:
(1) One can get remarkable tasks accomplished which cannot be explained by thinking of the quantum technology in terms of normal bits. So there must be some extra element involved which we label "quantum" bit. But here's the critical piece: this extra "quantum" element cannot be directly detected; all we observe are normal bits when we "look" at the system.
(2) One way to "see" this extra "quantum" stuff is through maths. Hence a valid description of a qubit is mathematical, and every translation of that is an interpretation that has not yet been proven.
In summary, no one knows what quantum bits are. We know there's something more than bits in quantum technologies, which we label as "quantum" bit. And so far, the only valid (yet unsatisfying) description is mathematical.
Hope that helps.
| improve this answer | |
A qubit (quantum bit) is a quantum system that can be fully described by ("lives in") a 2-dimensional complex vector space.
However, much more than that is required to do computations. There needs to exist two orthogonal basis vectors in that vector space, call them $|0\rangle$ and $|1\rangle$, that are stable in the sense that you can set the system very precisely to $|0\rangle$ or to $|1\rangle$, and it will stay there for a long time. This is easier said than done because unless noise is reduced somehow, it will cause the state to drift gradually so that it contains a component along both the $|0\rangle$ and $|1\rangle$ dimensions.
To do computations, you must also be able to induce a "complete" set of operations acting on one or two qubits. When you are not inducing an operation, qubits should not interact with each other. Unless interaction with the environment is suppressed, qubits will interact with each other.
A classical bit, by the way, is much simpler than a qubit. It's a system that can be described by a boolean variable
| improve this answer | |
Your Answer
|
7b9fe5f947433979 | Acta Scientific Paediatrics (ISSN: 2581-883X)
Review Article Volume 3 Issue 8
Obstructive Sleep Apnea in Children: An An Electron-based Paediatric Pulmonary Magnetic Resonance Imaging Device to Avoid Administering General Anaesthesia to Paediatric Patients while being Imaged by Exploiting the ‘Celalettin Tunnel Conjecture’
Metin Celalettin1* and Horace King2
1TEPS, Victoria University, Australia
2College of Engineering and Science, Victoria University, Australia
*Corresponding Author: Metin Celalettin, TEPS, Victoria University, Australia.
Received: June 26, 2020; Published: July 30, 2020
The ‘Celalettin-Field Quantum Observation Tunnel’ (Celalettin Tunnel) is a quantum observation technique. It is within a pneumatic manifold of Euclidean space where the randomness of particle Orbital Angular Momentum (OAM) is mitigated via electric polarization. It is described by the Celalettin Tunnel Conjecture: The presence of an electric field affects the nuclear spin of the particles within the pneumatic manifold. The manifold, namely the IC-Manifold, or Invizicloud© is unique as its axioms are a combination of classical and quantum non-logical parameters. The IC-Manifold has a variable density and exists only according to ‘Celalettin’s two rules of quantum interaction:
• Quantum interaction causes quantum observation during fundamental particle interactions with orbital angular momentum electric polarized atoms within the IC-Manifold causing depolarization.
• The photoelectric effect is not limited to solids but can occur in an IC-Manifold.
The ‘Celalettin Tunnel Conjecture’ can be exploited to make proposals to ‘Magnetic Resonance Imaging (MRI) machines, such that it could develop an MRI for paediatric use. Paediatric patients are typically administered a General Anaesthesia when undergoing MRI imaging due to the requirement to stay still for extended periods. MRI developed from NMR uses the same phenomena to identify chemical structures based on a spectrum. A technique; in vivo MR spectrograph allows chemical identification on specific parts of the brain. Looking at whether the cells in a brain tumour contain alpha-hydroxy glutaric acid differentiates gliomas that have a mutation in the IDH1 or IDH2 gene for example. Pulse sequences to glean biochemical information non-invasively can be recalibrated for different patients.
In an MRI, the flip angle is the rotation of the nuclear spin vector relative to the main magnetic field. To improve the signal with an MRI, the flip angle needs to be chosen using the Ernst angle. A 90° flip angle using the Ernst angle will yield the maximum signal intensity (or signal-to-noise ratio) per number of averaged Free Induction Decays (FID)s. The flips are done over and over against while the patient stays still, and the average number of nuclear spin ensembles is taken to produce the image. This can involve the patient remaining still for up to five minutes.
This study explores a 180° flip angle could be achieved, then rather than taking several measurements over the five-minute imaging time experienced by the patient, it could produce a decisive image one measurement. The way to do this would be to focus on the electron rather than the proton. Free electrons are not only aligned with a magnetic field but can be manipulated to be pulled into the direction of the south pole.
Keywords: Celalettin Tunnel; Magnetic Resonance; Anaesthesia
1. E Raicher., et al. "The Lagrangian formulation of strong-field quantum electrodynamics in a plasma”. Physics of Plasmas5 (2014): 053103.
2. M Lanzagorta. "Quantum radar”. Synthesis Lectures on Quantum Computing1 (2011): 1-139.
3. Salerno M., et al. "Emphysema: hyperpolarized helium 3 diffusion MR imaging of the lungs compared with spirometric indexes—initial experience”. Radiology1 (2002): 252-260.
4. Walker TG and Happer W. “Spin-exchange optical pumping of noble-gas nuclei”. Reviews of Modern Physics2 (1997): 629.
5. Celalettin M and King H. “The ‘Celalettin-Field Quantum Observation Tunnel’; a Quantum Communication Countermeasure Speculative Structure”. Scholar Journal of Applied Sciences and Research 9 (2009): 5-9.
6. Bassi K., et al. "Models of wave-function collapse, underlying theories, and experimental tests”. Reviews of Modern Physics 2 (2013): 471.
7. Gaëtan., et al. "Observation of collective excitation of two individual atoms in the Rydberg blockade regime”. Nature Physics2 (2009): 115.
8. WH Zurek. "Decoherence and the transition from quantum to classical—revisited”. in Quantum Decoherence: Springer (2006): 1-31.
9. Heisenberg. "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik”. in Original Scientific Papers Wissenschaftliche Originalarbeiten: Springer (1985): 478-504.
10. P Christillin. "Nuclear Compton scattering". Journal of Physics G: Nuclear and Particle Physics 9 (2006): 837-851.
11. Joos HD Zeh., et al. "Decoherence and the appearance of a classical world in quantum theory”. Springer Science and Business Media (2013).
12. Singh S., et al. "Plasma-based radar cross section reduction”. in Plasma-based Radar Cross Section Reduction: Springer (2016): 1-46.
13. Stewart J. "Angular momentum of the electromagnetic field: the plane wave paradox resolved”. European Journal of Physics4 (2005): 635.
14. An introduction to the calculus of variations. Courier Corporation (1987).
15. E Raicher., et al. "A novel solution to the Klein–Gordon equation in the presence of a strong rotating electric field”. Physics Letters B 750 (2015): 76-81.
16. Heisenberg W and Bond B. “Physics and philosophy: the revolution in modern science”. St. Leonards, Australia: Allen and Unwin (1959).
17. Gachet D., et al. "Revisiting the Young’s double slit experiment for background-free nonlinear Raman spectroscopy and microscopy”. Physical Review Letters21 (2010): 213905.
18. H Shirley. "Solution of the Schrödinger equation with a Hamiltonian periodic in time”. Physical Review4B (1965): B979.
19. Marrucci C Manzo and D Paparo. "Optical spin-to-orbital angular momentum conversion in inhomogeneous anisotropic media”. Physical Review Letters16 (2006): 163905.
20. Lüdge K and Finley JJ. "Long-term mutual phase locking of picosecond pulse pairs generated by a semiconductor nanowire laser". Nature Communications 8 (2017): 15521.EH Allen.
21. M Forrester and F Kusmartsev. "The nano-mechanics and magnetic properties of high moment synthetic antiferromagnetic particles". Physica Status Solidi A4 (2014): 884–889.
22. Kimichika Fukushima. “Electronic Structure Calculations for Anti-ferromagnetism of Cuprates Using SIWB Method for Anions in DV and a Density Functional Theory Confirming from Finite Element Method”. Advances in Quantum Chemistry 70 (2015): 1-29.
23. MJ Brandsema., et al. "Design considerations for quantum radar implementation”. in Proc. of SPIE 9077 (2014): 90770T-1.
24. S Haroche. "Cavity Quantum Electrodynamics: a review of Rydberg atom-microwave experiments on entanglement and decoherence”. in AIP Conference Proceedings 464.1 (1999): 45-66.
25. Panarella E. "Heisenberg uncertainty principle”. in Annales de la Fondation Louis de Broglie 12.2 (1987): 165-193.
26. Bozorth Richard M. Ferromagnetism, first published 1951, reprinted 1993 by IEEE Press, New York as a "Classic Reissue”.
Citation: Metin Celalettin and Horace King. “An Electron-based Paediatric Pulmonary Magnetic Resonance Imaging Device to Avoid Administering General Anaesthesia to Paediatric Patients while being Imaged by Exploiting the ‘Celalettin Tunnel Conjecture’". Acta Scientific Paediatrics 3.8 (2020): 56-61.
Indexed In
News and Events
• Certification for Review
• Submission Timeline for Upcoming Issue
The last date for submission of articles for regular Issues is November 10, 2020.
• Publication Certificate
• Best Article of the Issue
• Welcoming Article Submission
• Contact US |
1fcae5b6535ce6ba | Shielding of Electrons
in Atoms from H (Z=1) to Lw (Z=103)
When we study the binding energies of electrons in the atoms, we note that each electron is different. The nuclear charge around which it is in orbit, the nature of its orbit and the orbits of all the electrons, create a unique environment which is a sum of many electrostatic interactions, which change continually from instant to instant as the electrons perform their little dance around and through the nucleus. The average effect of all of these forces conspire to create a stable environment so that physical properties such as energy and angular momentum are conserved in the atom. The actual motions are far to complex to be adequately described (if we could describe them at all) and also be useful in helping us to understand the atom. On the other hand, these conserved properties are the best means at our disposal for aiding our understanding.
The chemical and physical nature of the atom is inherently bound up in the energy and angular momentum of its constitutent electrons. It is this which makes H2O so different from H2S. The concept of shielding was introduced early on as a means to explain the changes in binding energy and it has become a useful pedagodgical tool for teaching and explaining the periodic table.
When several electrons swirl around a positively charged nucleus, they will not only experience the attractive Coulomb potential of that nucleus but also the repulsive Coulomb potential with each other. If we consider a single electron ina particular orbit about a nucleus with a positive charge of Z, we can exactly solve the problem at both the non-relativistic (Schrödinger equation) and the relativistic (Dirac equation) level. But as soon as an additional electron appears in another orbit around the same nucleus, the repulsive force of this new electron will reduce the net attractive force of the nucleus upon the first electron. The extent of this reduction may be large or small depending upon the relative position of the two orbitals. For instance, if the first electron only travels in regions very close to the nucleus while the second is only very far from it, the decrease in the attractive potential may be quite small. However, if the opposite positions are maintained, then the decrease in charge may be almost exactly that equal to the charge of the shielding electron. When many electrons are present, they all contribute in their own, unique way to the shielding of each other from the attractive force of the nucleus. Each will be less strongly bound to the nucleus because of this shielding. The extent to which this attractive force is diminished is a measure of the change in the physical (i.e. spectroscopy) and chemical (i.e. reactivity) properties of the atom.
All the motions which contribute to the shielding of a given electron are far to complicated to measure (and cannot even be measured because of the Heisenberg Uncertainty principle) but their average effect is manifested directly in the change of binding energy. Based on this, we have taken all of the known electron binding energies and have compared them with the exact Dirac (relativistic solution) for an electron in the same quantum state for a single electron atom of the same charge. By using an iterative solution, we have calculated the shielding that must be present for each electron to give rise to the observed binding energy. The following links go to various tables and graphs which record these shielding constants. With this information, you can readily explain the changes in chemistry that are observed as you go down a group or across a row in the periodic table.
Atomic Shielding Constants
Hydrogen and the Alkali MetalsGroup 1: H to FrGroup 1: H to Fr
The Alkaline Earth MetalsGroup 2: Be to RaGroup 2: Be to Ra
Transition ElementsGroup 3: Sc, Y, La, AcGroup 3: Sc, Y, La, Ac
Transition ElementsGroup 4: Ti, Zr, HfGroup 4: Ti, Zr, Hf
Transition ElementsGroup 5: V, Nb, TaGroup 5: V, Nb, Ta
Transition ElementsGroup 6: Cr, Mo, WGroup 6: Cr, Mo, W
Transition ElementsGroup 7: Mn, Tc, ReGroup 7: Mn, Tc, Re
Transition ElementsGroup 8: Fe, Ru, OsGroup 8: Fe, Ru, Os
Transition ElementsGroup 9: Co, Rh, IrGroup 9: Co, Rh, Ir
Transition ElementsGroup 10: Ni, Pd, PtGroup 10: Ni, Pd, Pt
Transition ElementsGroup 11: Cu, Ag, AuGroup 11: Cu, Ag, Au
Transition ElementsGroup 12: Zn, Cd, HgGroup 12: Zn, Cd, Hg
Boron FamilyGroup 13: B to TlGroup 13: B to Tl
Carbon FamilyGroup 14: C to PbGroup 14: C to Pb
Nitrogen FamilyGroup 15: N to BiGroup 15: N to Bi
Oxygen Family
The Chalcogenides
Group 16: O to PoGroup 16: O to Po
The HalogensGroup 17: F to AtGroup 17: F to At
The Noble or Inert GasesGroup 18: He to RnGroup 18: He to Rn
The LanthanidesCe to LuCe to Lu
The ActinidesTh to LwTh to Lw
More Graphs of Electron Shielding Constants
All ElectronsAll n=1 ElectronsAll n=2 ElectronsAll n=3 Electrons
All n=4 ElectronsAll n=5 ElectronsAll n=6 ElectronsAll n=7 Electrons
All s ElectronsAll p ElectronsAll d ElectronsAll f Electrons
Author: Dan Thomas Email: <>
Last Updated: Sun, Feb 9, 1997 |
5e5d900e14336de2 | Quantum operation
From Wikipedia, the free encyclopedia
Jump to: navigation, search
In quantum mechanics, a quantum operation (also known as quantum dynamical map or quantum process) is a mathematical formalism used to describe a broad class of transformations that a quantum mechanical system can undergo. This was first discussed as a general stochastic transformation for a density matrix by George Sudarshan.[1] The quantum operation formalism describes not only unitary time evolution or symmetry transformations of isolated systems, but also the effects of measurement and transient interactions with an environment. In the context of quantum computation, a quantum operation is called a quantum channel.
Note that some authors use the term "quantum operation" to refer specifically to completely positive (CP) and non-trace-increasing maps on the space of density matricies, and the term "quantum channel" to refer to the subset of those that are strictly trace-preserving.[2]
Quantum operations are formulated in terms of the density operator description of a quantum mechanical system. Rigorously, a quantum operation is a linear, completely positive map from the set of density operators into itself.
Some quantum processes cannot be captured within the quantum operation formalism;[3] in principle, the density matrix of a quantum system can undergo completely arbitrary time evolution. Quantum operations are generalized by quantum instruments, which capture the classical information obtained during measurements, in addition to the quantum information.
The Schrödinger picture provides a satisfactory account of time evolution of state for a quantum mechanical system under certain assumptions. These assumptions include
• The system is non-relativistic
• The system is isolated.
The Schrödinger picture for time evolution has several mathematically equivalent formulations. One such formulation expresses the time rate of change of the state via the Schrödinger equation. A more suitable formulation for this exposition is expressed as follows:
The effect of the passage of t units of time on the state of an isolated system S is given by a unitary operator Ut on the Hilbert space H associated to S.
This means that if the system is in a state corresponding to vH at an instant of time s, then the state after t units of time will be Ut v. For relativistic systems, there is no universal time parameter, but we can still formulate the effect of certain reversible transformations on the quantum mechanical system. For instance, state transformations relating observers in different frames of reference are given by unitary transformations. In any case, these state transformations carry pure states into pure states; this is often formulated by saying that in this idealized framework, there is no decoherence.
For interacting (or open) systems, such as those undergoing measurement, the situation is entirely different. To begin with, the state changes experienced by such systems cannot be accounted for exclusively by a transformation on the set of pure states (that is, those associated to vectors of norm 1 in H). After such an interaction, a system in a pure state φ may no longer be in the pure state φ. In general it will be in a statistical mix of a sequence of pure states φ1,..., φk with respective probabilities λ1,..., λk. The transition from a pure state to a mixed state is known as decoherence.
Numerous mathematical formalisms have been established to handle the case of an interacting system. The quantum operation formalism emerged around 1983 from work of Karl Kraus, who relied on the earlier mathematical work of Man-Duen Choi. It has the advantage that it expresses operations such as measurement as a mapping from density states to density states. In particular, the effect of quantum operations stays within the set of density states.
Recall that a density operator is a non-negative operator on a Hilbert space with unit trace.
Mathematically, a quantum operation is a linear map Φ between spaces of trace class operators on Hilbert spaces H and G such that
• If S is a density operator, Tr(Φ(S)) ≤ 1.
• Φ is completely positive, that is for any natural number n, and any square matrix of size n whose entries are trace-class operators
\begin{bmatrix} S_{11} & \cdots & S_{1 n}\\ \vdots & \ddots & \vdots \\ S_{n 1} & \cdots & S_{n n}\end{bmatrix}
and which is non-negative, then
\begin{bmatrix} \Phi(S_{11}) & \cdots & \Phi(S_{1 n})\\ \vdots & \ddots & \vdots \\ \Phi(S_{n 1}) & \cdots & \Phi(S_{n n})\end{bmatrix}
is also non-negative. In other words, Φ is completely positive if \Phi \otimes I_n is positive for all n, where I_n denotes the identity map on the C*-algebra of n \times n matrices.
Note that, by the first condition, quantum operations may not preserve the normalization property of statistical ensembles. In probabilistic terms, quantum operations may be sub-Markovian. In order that a quantum operation preserve the set of density matrices, we need the additional assumption that it is trace-preserving.
In the context of quantum information, the quantum operations defined here, i.e. completely positive maps that do not increase the trace, are also called quantum channels or stochastic maps. The formulation here is confined to channels between quantum states; however, it can be extended to include classical states as well, therefore allowing quantum and classical information to be handled simultaneously.
Kraus operators[edit]
Kraus' theorem characterizes maps that model quantum operations between density operators of quantum state:
Theorem.[4] Let H and G be Hilbert spaces of dimension n and m respectively, and Φ be a quantum operation taking the density matrices acting on H to those acting on G. Then there are matrices
\{ B_i \}_{1 \leq i \leq nm}
mapping G to H, such that
\Phi(S) = \sum_i B^*_i S B_i.
Conversely, any map Φ of this form is a quantum operation provided
\sum_i B_i B^*_i \leq 1.
The matrices \{ B_i \} are called Kraus operators. (Sometimes they are known as noise operators or error operators, especially in the context quantum information processing where the quantum operation represents the noisy, error-producing effects of the environment.) The Stinespring factorization theorem extends the above result to arbitrary separable Hilbert spaces H and G. There, S is replaced by a trace class operator and \{ B_i \} by a sequence of bounded operators.
Unitary equivalence[edit]
Kraus matrices are not uniquely determined by the quantum operation Φ in general. For example, different Cholesky factorizations of the Choi matrix might give different sets of Kraus operators. The following theorem states that all systems of Kraus matrices which represent the same quantum operation are related by a unitary transformation:
Theorem. Let Φ be a (not necessarily trace preserving) quantum operation on a finite-dimensional Hilbert space H with two representing sequences of Kraus matrices {Bi}i≤ N and {Ci}i≤ N . Then there is a unitary operator matrix \; (u_{ij})_{ij} such that
C_i = \sum_{j} u_{ij} B_j . \quad
In the infinite-dimensional case, this generalizes to a relationship between two minimal Stinespring representations.
It is a consequence of Stinespring's theorem that all quantum operations can be implemented via unitary evolution after coupling a suitable ancilla to the original system.
These results can be also derived from Choi's theorem on completely positive maps, characterizing a completely positive finite-dimensional map by a unique Hermitian-positive density operator (Choi matrix) with respect to the trace. Among all possible Kraus representations of a given channel, there exists a canonical form distinguished by the orthogonality relation of Kraus operators, {\rm Tr} A^{\dagger}_i A_j \sim \delta_{ij} . Such a canonical set of orthogonal Kraus operators can be obtained by diagonalising the corresponding Choi matrix and reshaping its eigenvectors into square matrices.
There also exists an infinite-dimensional algebraic generalization of Choi's theorem, known as 'Belavkin's Radon-Nikodym theorem for completely positive maps', which defines a density operator as a "Radon-Nikodym derivative" of a quantum channel with respect to a dominating completely positive map (reference channel). It is used for defining the relative fidelities and mutual informations for quantum channels.
For a non-relativistic quantum mechanical system, its time evolution is described by a one-parameter group of automorphisms {αt}t of Q. This can be narrowed to unitary transformations: under certain weak technical conditions (see the article on quantum logic and the Varadarajan reference), there is a strongly continuous one-parameter group {Ut}t of unitary transformations of the underlying Hilbert space such that the elements E of Q evolve according to the formula
\alpha_t(E) = U^*_t E U_t.
The system time evolution can also be regarded dually as time evolution of the statistical state space. The evolution of the statistical state is given by a family of operators {βt}t such that
\operatorname{Tr}(\beta_t(S) E) = \operatorname{Tr}(S \alpha_{-t}(E)) = \operatorname{Tr}(S U _t E U^*_t )=\operatorname{Tr}( U^*_t S U _t E ).
Clearly, for each value of t, SU*t S Ut is a quantum operation. Moreover, this operation is reversible.
This can be easily generalized: If G is a connected Lie group of symmetries of Q satisfying the same weak continuity conditions, then the action of any element g of G is given by a unitary operator U:
g \cdot E = U_g E U_g^*. \quad
This mapping gUg is known as a projective representation of G. The mappings SU*g S Ug are reversible quantum operations.
Quantum measurement[edit]
Quantum operations can be used to describe the process of quantum measurement. The presentation below describes measurement in terms of self-adjoint projections on a separable complex Hilbert space H, that is, in terms of a PVM (Projection-valued_measure). In the general case, measurements can be made using non-orthogonal operators, via the notions of POVM. The non-orthogonal case is interesting, as it can improve the overall efficiency of the quantum instrument.
Binary measurements[edit]
Quantum systems may be measured by applying a series of yes–no questions. This set of questions can be understood to be chosen from an orthocomplemented lattice Q of propositions in quantum logic. The lattice is equivalent to the space of self-adjoint projections on a separable complex Hilbert space H.
Consider a system in some state S, with the goal of determining whether it has some property E, where E is an element of the lattice of quantum yes-no questions. Measurement, in this context, means submitting the system to some procedure to determine whether the state satisfies the property. The reference to system state, in this discussion, can be given an operational meaning by considering a statistical ensemble of systems. Each measurement yields some definite value 0 or 1; moreover application of the measurement process to the ensemble results in a predictable change of the statistical state. This transformation of the statistical state is given by the quantum operation
S \mapsto E S E + (I - E) S (I - E).
Here E can be understood to be a projection operator.
General case[edit]
In the general case, measurements are made on observables taking on more than two values.
When an observable A has a pure point spectrum, it can be written in terms of an orthonormal basis of eigenvectors. That is, A has a spectral decomposition
A = \sum_\lambda \lambda \operatorname{E}_A(\lambda)
where EA(λ) is a family of pairwise orthogonal projections, each onto the respective eigenspace of A associated with the measurement value λ.
Measurement of the observable A yields an eigenvalue of A. Repeated measurements, made on a statistical ensemble S of systems, results in a probability distribution over the eigenvalue spectrum of A. It is a discrete probability distribution, and is given by
\operatorname{Pr}(\lambda) = \operatorname{Tr}(S \operatorname{E}_A(\lambda)).
Measurement of the statisical state S is given by the map
S \mapsto \sum_\lambda \operatorname{E}_A(\lambda) S \operatorname{E}_A(\lambda)\ .
That is, immediately after measurement, the statistical state is a classical distribution over the eigenspaces associated with the possible values λ of the observable: S is a mixed state.
Non-completely positive maps[edit]
Shaji and Sudarshan argued in a Physics Letters A paper that, upon close examination, complete positivity is not a requirement for a good representation of open quantum evolution. Their calculations show that, when starting with some fixed initial correlations between the observed system and the environment, the map restricted to the system itself is not necessarily even positive. However, it is not positive only for those states that do not satisfy the assumption about the form of initial correlations. Thus, they show that to get a full understanding of quantum evolution, non completely-positive maps should be considered as well.[3][5]
See also[edit]
1. ^ "Stochastic Dynamics of Quantum-Mechanical Systems" .
2. ^ C. Weedbrook at al., "Gaussian quantum information", Rev. Mod. Phys. 84, 621 (2012).
3. ^ a b Philip Pechukas, "Reduced Dynamics Need Not Be Completely Positive", Phys. Rev. Lett. 73, 1060 (1994).
4. ^ This theorem is proved in the Nielsen and Chuang reference, Theorems 8.1 and 8.3.
5. ^ Anil Shaji and E.C.G. Sudarshan "Who's Afraid of not Completely Positive Maps?", Physics Letters A Volume 341, 20 June 2005, Pages 48–54
• M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2000
• M. Choi, Completely Positive Linear Maps on Complex matrices, Linear Algebra and Its Applications, 285–290, 1975
• E. C. G. Sudarshan et al. Stochastic Dynamics of Quantum-Mechanical Systems, Phys. Rev. 121, 920–924, 1961.
• V. P. Belavkin, P. Staszewski, Radon–Nikodym Theorem for Completely Positive Maps, Reports on Mathematical Physics, v.24, No 1, 49–55, 1986.
• K. Kraus, States, Effects and Operations: Fundamental Notions of Quantum Theory, Springer Verlag 1983
• W. F. Stinespring, Positive Functions on C*-algebras, Proceedings of the American Mathematical Society, 211–216, 1955
• V. Varadarajan, The Geometry of Quantum Mechanics vols 1 and 2, Springer-Verlag 1985 |
b8c6b0d4380964d0 |
Sign up
Here's how it works:
1. Anybody can ask a question
2. Anybody can answer
In many areas of mathematics (PDE, Algebra, combinatorics, geometry) when we have difficulty in coming with a solution to a problem we consider various notions of "generalized solutions". (There are also other reasons to generalize the notion of a solution in various contexts.)
I would like to collect a list of "generalized solutions" concepts in various areas of mathematics, hoping that looking at these various concepts side-by side can be useful and interesting.
Let me demonstrate what I mean by an example from graph theory: A perfect matching in a graph is a set of disjoint edges such that every vertex is included in precisely one edge. A fractional perfect matching is an assignment of non negative weights to the edges so that for every vertex, the sum of weights is 1. In combinatorics, moving from a notion described by a 0-1 solution for a linear programming problem to the solution over the reals is called LP relaxation of a problem and it is quite important in various contexts.
(There are, of course, useful papers or other resources on generalized solutions in specific areas. It will be useful to have links to those but not as a substitute for actual answers with some details.)
share|cite|improve this question
Actually the scope of the answers is much larger than what I thought! (But I cannot formally define what was the more restricted scope I had in mind). – Gil Kalai Oct 8 '10 at 21:16
16 Answers 16
Partial Differential Equations (PDE) is a topic where generalizing the notion of solutions is a daily activity.
The most obvious generalization has been the notion of weak solutions, which means that a solution $u$ is not necessarily differentiable enough times for the derivatives involved in the equation to make sense; but an integration against test functions, followed by an integration by parts, cures the problem. The most known example is that of the Laplace equation $$\Delta u=f\qquad\hbox{over }\Omega,$$ where it is enough for $u$ to have locally integrable first-order derivatives, by rewriting the equation as a variational formulation (Dirichlet principle) $$\int_\Omega \nabla u\cdot\nabla vdx=-\int_\Omega fvdx$$ for every $v\in{\mathcal C}^1_c(\Omega)$ (subscript $c$ means compact support).
What is important in this process is to satisfy the rule
If $u$ has enough derivatives that the equation makes sense pointwise, then it is a weak solution if and only if it is a classical solution.
Let us mention in passing that in order to use the full strength of functional analysis and operator theory, this weak notion of solutions led to the birth of Sobolev spaces and Distribution theory (L. Schwartz).
This framework has been used for nonlinear equations and systems too, for instance for the Navier-Stokes, Euler, Schrödinger equations, ... An important question is whether this framework is accurate or not. By accurate, we mean that boundary and/or initial data yield a unique solution, which depends continuously on the data. This is the question of well-posedness. In many cases, functional analysis, sometimes associated to topological arguments, yield an existence theorem. A celebrated one is J. Leray's existence result to the Navier-Stokes equation of an incompressible fluid. However, uniqueness is often an other matter, a difficult one. For a $3$-dimensional fluid, the uniqueness to Navier-Stokes is a $1$M US Dollar open question.
Uniqueness is often (but not always) associated to regularity. In many situations, there are weak-strong uniqueness result, which state that if a classical, or a regular enough solution exists, then there does not exist any other weak solution (say in a class where we do have an existence result). It is an if-theorem, in the absence of an existence result of strong solutions. For elliptic and parabolic equations, the regularity theory is a topic of its own.
Whereas regularity is often expected in elliptic or parabolic equations and systems, it is not for hyperbolic ones, because we know that singularities do propagate, and that they can even be created in finite time thanks to nonlinear effects. Then the notion of weak solutions becomes meaningful, in that it translates in mathematical terms the physical notion of conserved quantities. It gives algebraic relation for the jump of the solution and its derivatives across discontinuities (Rankine-Hugoniot relations).
Finally, I like a lot the way the theory of nonlinear elliptic equations, and of Hamilton-Jacobi equations have develloped in the past decades. At the beginning, it was observed that the maximum principle, known for classical solutions, remains valid for weak ones. This suggested, when the nonlinearity is so strong that a variational formulation is not available, that the maximum principle itself be used to define a notion of viscosity solution. The idea is to test at $x_0$ the PDE with a test function $\phi$ being comparable to $u$ (either $\phi\le u$ or $\phi\ge u$ locally) and touching $u$ at $x_0$. This has been extremely powerfull.
share|cite|improve this answer
In the third to last paragraph, you wrote: "Uniqueness is often (but not always) associated to uniqueness". Is that intentional? Perhaps one of the two uniquenesses ought to be regularity? – Willie Wong Oct 7 '10 at 17:23
Of course ! Thank you for careful reading. I correct immediately. – Denis Serre Oct 8 '10 at 6:38
Formal solutions to partial differential relations
Given a partial differential relation, that is, a subset $\mathcal{R} \subset J^k(\mathbb{R}^n, \mathbb{R}^m)$ of the space of $k$-jets of smooth maps $\mathbb{R}^n \to \mathbb{R}^m$, one can consider the space of smooth (say) maps $f$ from an $n$-manifold $N$ to an $m$-manifold $M$ such that $J^k(f) \in \mathcal{R}$, i.e. so that the $k$-jet of the function lies in the subspace $\mathcal{R}$ at each point. Call the space of such maps $\mathrm{Sol}_\mathcal{R}(N, M)$.
On the other hand, we can consider the bundle $J^k(N, M) \to N$ of $k$-jets of maps from $N$ to $M$, and the associated subbundle $\mathcal{R}(N, M) \to N$, and call the space of sections of this last bundle $\mathrm{FSol}_\mathcal{R}(N,M)$, the space of formal solutions. This space is far easier to analyse, for example because constructing sections of a bundle is a purely homotopy-theoretic problem.
Taking derivatives gives a comparison map $$\mathrm{Sol}_\mathcal{R}(N, M) \to \mathrm{FSol}_\mathcal{R}(N, M).$$ If $\mathcal{R}$ is open in $J^k(\mathbb{R}^n, \mathbb{R}^m)$ and the manifold $N$ is open, Gromov showed that the comparison map is a homotopy equivalence. In particular, if the space of formal solutions is non-empty, so is the space of actual solutions.
share|cite|improve this answer
Moduli problem: find a good parametrization of geometric objects of some type; parametrization should form a collection equipped with some natural geometric structure, therefore being a geometric object in its own right. While naive "parameter space" is a set, in structured formulation it is replaced by a moduli space which classifies the geometric objects we started with. In the simplest case, the moduli problem is representable by a space in a usual sense, an object in more or less the same category in which the original geometric object was. For example a manifold or a scheme where the original objects were manifolds or schemes. With harder problems the moduli lead to more and more general kinds of objects. This motivated new types of spaces as stacks, higher stacks, derived stacks and so on.
It appears that starting with original geometric category, most of the generalized objects needed to solve the moduli problem live in some nice geometric subcategory (e.g. algebraic stacks) of the category of (possibly categorified) presheaves or sheaves on the original category, including higher versions like simplicial presheaves and so on. The original category embeds by the corresponding version of Yoneda embedding into the category of (pre)sheaves. The new ambient category of presheaves not only more generically has a solution to the moduli problem, but also has many other improved natural properties like closedness under limits.
Cohomology theories, various generalized cocycles and so on, generalized smoothness notions and so on, can also be accomodated after Yoneda embedding into a homotopy correct version of presheaf category, like in the emerging subject of derived geometry. In the original terms of non-generalized spaces, one would need to use all kinds of difficult and dirty technique to define and study the generalized notions, for example introducing various piecewise-continuous cocycles, multivalued or infinite-dimensional models and so on. Methods depending on Yoneda philosophy give rather universal setting to attack moduli problems and many other problems (like deformation theory), allowing to often eliminate construction of very elaborate but ad hoc modifications of original concepts. Inside the bigger category it may be easier to cut out some nice geometric subcategory of geometric spaces which include the solutions to the moduli problem than constructing some similar category in terms of original geometry. Of course, sometimes the difficult elementary models have their own specific strengths, which do not follow from the application of general methods.
share|cite|improve this answer
Affine schemes:
Given any ring $R$, try to find a map from it into local ring $L$ which is initial among maps to local rings (i.e. any other map from $R$ into a local ring should factor through this one, followed by a map of local rings, i.e. one such that the preimage of the maximal ideal is the maximal ideal). Such a thing does not exist, unless $R$ is already local.
But if we allow $L$ to be a ring object living in a different topos than that of sets, then it exists: It is the local ring object living in $Sh(Spec R)$ given by the structure sheaf $\mathcal{O}_{Spec R}$ (see also my post here)
share|cite|improve this answer
Given a set of polynomial diophantine equations, it is useful to study solutions in any ring, instead of just studying integer solutions. (This is the "functorial point of view" of a scheme over $\mathbb Z$.)
share|cite|improve this answer
Well, then I'll start with the most obvious generalized solutions:
• weak solutions to PDEs
• Schwartz's generalized Functions aka Distributions,
• Colombeau's algebra(s) of generalized functions and
• various other kinds of generalized functions
• Quasi-Minima in functional analysis: A quasi-minimum of a functional $\mathcal{F}$ is a $u$ such that $\mathcal{F}u\leq Q\mathcal{F}v$ for all $v$ (with some constant $Q\geq 1$)
• Every solution of an polynomial equation within $\mathbb{C}$ can be a generalized solution if you're problem is something that has only real (maybe some geometric problem) or only integer or even only natural (maybe something from number theory) solutions. But considering all complex solution to your particular equation often gives a very elegant treatment of the problem.
share|cite|improve this answer
Ideals in rings of integers of number fields arose as "ideal numbers"...
share|cite|improve this answer
Grothendieck topologies (or: toposes as generalized spaces):
There is no topology on a general scheme which is e.g. fine enough to give back the cohomological dimensions expected from geometry, but with a more general notion of covering (or: of space) this works out.
share|cite|improve this answer
Quotient "spaces":
While quotients, e.g. of group actions, in geometry often are degenerate, several generalized notions of quotient space help here: Sheaf quotients, Orbifolds, Algebraic Spaces, Stack quotients, Homotopy quotients, Non-commutative quotients, GIT quotients, ...
(similarly with moduli spaces)
share|cite|improve this answer
Complex numbers arose as ideal solutions of polynomial equations with real coefficients, I guess.
share|cite|improve this answer
I edited another answer of yours where you wrote "Polinomial". Are you doing it intentionally or is there any other reason? Thanks – Unknown Oct 7 '10 at 16:17
No, I'm just careless :-) If you clean up those things I have no objections - thanks! – Peter Arndt Oct 8 '10 at 0:21
Generalized Eigenvector
I'm surprised no one has yet mentioned the first example an undergraduate is likely to see. Suppose $A$ is a linear map from a finite dimensional vector space $V$ to $V$, with eigenvalue $\lambda$. Any nonzero vector in $\text{ker}(A-\lambda I)^k$ for some $k\ge1$ is called a generalized eigenvector for eigenvalue $\lambda$. These are used in proving the existance of the Jordan Canonical Form.
share|cite|improve this answer
In linear algebra (linear inverse problems) one generalizes the notion of a solution of a linear operator equation $Ax=y$ to
1. "best approximation" if there is no solution, i.e. minimizing the functional $\|Ax-y\|$,
2. "Minimum-norm solution" if there is a subspace of solutions, i.e. taking that solution of $Ax=y$ which has minimal norm,
3. both (if the best approximation is not unique) leading to the Moore-Penrose inverse.
share|cite|improve this answer
Virtual knots
Louis Kauffman generalized the knots by introducing virtual crossings, virtual knots and virtual Reidemeister moves. He obtained some interesting developments in knot theory.
share|cite|improve this answer
One of the most fruitful notion of generalized solution in optimization and combinatorics is linear programming relaxation. Quoting from the wikipedia article: In mathematics, the linear programming relaxation of a 0-1 integer program is the problem that arises by replacing the constraint that each variable must be 0 or 1 by a weaker constraint, that each variable belong to the interval [0,1].
share|cite|improve this answer
A form of "generalized solution" which I saw in various areas like for combinatorial optimization problems, for diophanine equations, for computational complexity purposes, and others is "statistical physics relaxation". You regard your original problem as a "temperature 0" case of a more general problem and try to gain insight on the original problem based on statistical-physics insights for the generalized problem. I am not sure what is the general recipe for this apprach and I will be happy to see an edited version with further explanation and links.
share|cite|improve this answer
I'll mention a recent paper by Baez and Stay on 'Algorithmic thermodynamics', which contains results about randomness and complexity, depending on a temperature parameter, in which, to quote, "the randomness described by Chaitin and Tadaki then arises as the infinite-temperature limit." – David Roberts Dec 6 '10 at 12:23
I think that such an example is the use of the Residue theorem to calculate contour integrals. You use complex analysis to solve a problem in real analysis.
share|cite|improve this answer
Your Answer
|
4013bb2418268339 | DIY: Simulate Atoms and Molecules...
1. I would like to see how far I can get writing some novel & simplified simulator code for molecular modeling based on QM.
------ Background follows ---- skip to next post to get to the details of the project & starter QM question..
My son took chemistry this last semester (High-school level) and to help him out I wanted to find a free molecular modeler tool that would allow him to visualize VSEPR like images for molecules (PI/Sigma bonds), and also for myself; I was severely disappointed. As I have have some QM work that I may later publish (free) I don't want to enter into any kind of ip contract for such a program. eg: most molecular modelers are either 1. $1K> / seat or 2. proprietary, and want signed papers ... although the methods and implementations have been around for nearly as long as computers.
This became acute a few weeks ago when I succeeded in doing an electrolytic process that I previously though impossible because water would be decomposed; but it worked in spite of that. I desperately wanted to understand this chemical process although I am not a chemist but an EE. (Chem was my weak point in college). After a fairly diligent search, I discovered that the only program which was totally free in a sense usable to me was Ghemical, built on top of Jmol as a visualizer.
Sad to say the code is extremely buggy in places; The Java version never got beyond the splash screen, and the C/fortan code version could not completely compile -- especially in the area of orbital interaction based on old Fortran code, so that some features were disabled in my version.
I still had the crude "optimize geometry" function which I had hoped was close to VSEPR (Vander Waal's outlines...) and that would have been good enough -- but even that has serious bugs; Eg: when I built a test crystal that I knew what its bonds were the optimization turned it into a 2D item with atoms that would practically have to undergo nuclear fusion to be in the places they were optimized to...
I am not adverse to fixing code -- I have done quite a bit of it including kernel drivers, but I would rather spend my time on something other than old algorithms such as Hartree Fock, slater determinants, and all the well known problems of these variational methods which make my eyes glaze over.
Nobody yet has a flawless simulator....
So I would like to write my own code, both for my son and I -- but do not understand QM sufficiently to complete the project, and would like some feedback on problems I see and perhaps a heads up on any issues where I show myself to be blatantly ignorant of something. (As a BSEE I solve solid state QM problems regularly, but molecular modeling is a bit different ... )
I have figured out how to numerically simulate a field, such as E-M field, in ways that are reasonably accurate and efficient (sparse matrix). I wrote an electronics simulator using these techniques and am able to simulate circuit functions and nonlinear devices with extreme accuracy. I know this is sufficient as a foundation for actually building virtual "space" for watching EM in the classical Maxwell sense from discreetly located electrons and protons. So, I would like to experiment on the next possible step -- replacing Hartree Fock approximations, Slater determinants, etc, with the pilot wave interpretation of the Schrodinger equation and a statistical sampling of a (perhaps modified) Schrodinger equation.
As inspiration for trying this, I came across an old textbook (1950's) showing all of the QM orbitals S,P,D,F, modeled *correctly* using a mechanical spinning top -- and the accuracy was surprising to me -- so I do believe that a semi-classical simulation can produce fairly accurate results.
2. jcsd
3. My first goal is a simple electron simulation, and perhaps atomic simulation of Hydrogen and Helium.
In particular, I would like to be able to simulate effects of the Bohr Magneton, orbital transitions, and the Stern Gerlach experiment; which ought to work for hydrogen, but fail for Helium.
For now, I am looking at the Bohr magneton as an effect proceeding from a point (or ring), affecting the EM field -- and am simply using the definition of energy in a dipole magnet to simulate the electron spin's effect using a quasi ZPE technique, and using the pilot wave idea with QM/Schrodinger's to decide where the electron may statistically go.
I know how to use Boltzmann eqn. to determine the probability distribution among orbitals;
But I am stumped on orbital transitions -- since I am using a dynamic time based simulation.
I also hope that the speed of light effects also included in my EM field may yield results which are consistent with relativity ... but I'll settle for classical if not.
What I envision is that in the Schrodinger Eqn, individual "orbital" states can be computed given the immediately surrounding EM field. Ignoring for discussion, but not simulation, the motion of the proton -- thus -- the electron can be said to be in a state who's probability distribution is known (eg: at statistical sample time t) for the EM field given, regardless of how many particles created that field.
The self field of the electron/proton/etc being easily removed for that calculation.
One can in temporary memory, vary the total energy of an electron in say the 1s state to the inter-orbital state 1/2(1s + 2s) to determine when the EM field would permit the electron to change to the 2S state (although no change in electron position would be recorded). I intend to experiment with various algorithms for conserving energy... and am not concerned about that yet.
I know from semiconductor physics how to compute the probability of an item being in various states; but I have no idea what the mechanism/mathematics of spontaneous emission ought to look like.
I haven't found any mathematics which I understand and which explain how to estimate/compute the time an electron will stay in an excited state -- and what dependencies that has on the EM field around it.
In reality this may not be a problem, as I expect the EM field will always be changing and may simply naturally fall to lower levels at the appropriate times statistically, I have no way of verifying if a repeated experiment is right on average or not without some kind of estimate.
So, I'll stop here as I have no idea how complex the answer will be to this first question, what determines the lifetime of an excited state.
4. Probably I am not helping you out with this reply, but I am also interested in developing my own C code for simulating quantum physics. Up to now I have succeeded performing classical many-body gravitation and classical Ising models, but I would like to jump to quantum realm. My wife used Gaussian software to perform chemical simulations in her PhD thesis, but I would like to build my own code from scratch.
Just when figuring out what to do in my mind, I have found an important problem. I would like to get a dynamic picture as the result of the simulation. I mean, the position of the electron at different times. But, according to quantum mechanics, getting the position implies collapsing the wave-function. That leads me to a Zeno effect, or kind of, because the simulator is constantly performing position measurements on the system. I am concerned about that, but I am not sure if my point is correct or I am missing something important.
5. Heisenburg's principle applies to actual measurements. He himself, in one document I read, admitted that once the experiment was over his principle did not by itself prevent one from determining where the measured item was in the past. (That is not to say that it is possible to always determine it in every kind of experiment.) However, the Heisenburg uncertanty principle does destroy the ability to predict future positions after a measurement is made. There are philosophical issues which I really don't have answers to. But a hand-grenade simulator does not have to be perfect, just close.
I would point you back to the physical top model I mentioned which generates the different orbitals of quantum mechanics. The device used time lapse photography to take pictures of a light which a specially designed gyrating top manipulated. In this way, the more often the top was in a certain position, the brighter that point was on the film. "Modern college physics" Fifth edition, Harvey E. White -- 1966, (C) Litton Educational Publishing, Published by: Van Nostrand Rheinhold Company, N.Y, N.Y., PP. 562. The top produced reasonably accurate pictures all the way to the 6**2F[7/2] orbital. Since the model is macroscopic, Heisenburg does not really apply. Considering the activity of the top faithfully reproduced the orbitals -- and also considering that the uncertanty at the atomic level is high enough to make the precision of individual measurements impossible -- one is left with the conclusion that even if the model is not working identical with nature -- it none the less is a good enough approximation to solve problems of interest.
I analyzed the Schrodinger equation some years ago substituting in a classical velocity interpretation for probabilty, and wanted to see how that would correlate with a periodic oscillator and other such standard fare. I especially wanted to do it because I wanted to see if Schrodinger's equation took into account correlation from repeating orbits (assuming a non-stationary one). The answer was a surprising no. The probability does not include such correlation, but appears to represent the probability of a single shot experiment through each point. It is a diffusion equation, and I don't know why it doesn't contain any useful information about possible repeating "statistical" orbits -- except that the equation somehow implies that all paths (not speeds) are equally likely. Again, this is philosophical -- not so much practical for it may be simply an artifact of the equation -- but it would seem that in the 1D particle box, it is perfectly legitimate to say that the electron is under one "hump" of probability *only* and never goes through the zero probability points (infinite speed exceeds C) -- but the odds of the electron being in any one of the humps is equally likely. The alternate interpretation, and the more knee jerk one -- is to assume the electron travels by tunneling from one probability hump to another since they are all there in the solution.
I am not familiar with Ising models, so I can't really comment on what the changes will be like.
In the end, though, the Schrodinger equation to me looks to me like a 3D version of Bohr's equation -- that is, instead of saying the nodes along a classical trajectory must be exactly a wavelength multiple -- Schrodinger's appears to say that the phase loop around *ANY* path must be 360 degrees with the local "wavelength" varying according to the EM field. Bohr's orbits being circular and therefore tracing out a constant energy had only a single value of wavelength; so Schrodinger simply generalized it and perhaps got rid of degenerate cases. (Eg: 1s does not orbit kepler like, but hovers based on the wave equation. Don't forget a Bohr magnetic spin effect is also involved in the hovering. ) I am interested to see what happens.
6. alxm
alxm 1,864
Science Advisor
VESPR doesn't involve pi and sigma bonds, that's valence-bond theory.
That doesn't nmean others don't exist. E.g. PyQuante, and quite a few DFT codes are available for download directly. Also programs like Dalton and ORCA require a license, but don't have any particularily draconian restrictions. Mostly they want to keep track of who is using their software, and make sure you cite them if you do.
QC software won't help you if you don't already know a great deal about what you're studying. You need a solid understanding of the chemistry involved before you can do calculations. (you also need a solid understanding of the methods). Whether an electrolytical reaction can occur can be determined from standard electrode potentials.
Which "well-known problems"?
And what's wrong with Hartree-Fock? It's been the basis of the majority of QC methods for the last 80 years, and there have been good reasons for it. (Nor is it an algorithm, btw, it's an physical/mathematical approximation. SCF is an algorithm) What do you mean by 'flawless simulator'? There are quantum-chemical methods such as CI and CC (which are based on Hartree-Fock) which are exact, in principle. You can hardly do better than that, as far as accuracy is concerned. (Scaling and speed is another matter)
The best recommendation I can give you is to get some textbooks and start studying QM and QC, work your way up to the current state-of-the-art in the field, and then you can start thinking about how to improve on the existing methods.
7. That's VSEPR not VESPR; You just blew it.
Looking in my Chem book and several others either ball and stick sketches or overlapping P orbitals are sketched in the VSEPR chapter. What I said still stands: VSEPR like sketches.
If you wish to quibble, I never said Sigma and Pi orbitals are officially part of VSEPR theory -- I don't believe you are denying Sigma and Pi bonds exist in molecules, and that VSEPR theory is used to sketch molecules. So it seems you are concerned over accuracy of the sketching or something, or just didn't notice things like:
And how does one sketch these force fields? Is it forbidden to use a pi LIKE bond sketch?
Or are you demanding I dot my i's and cross my t's because my statements might not fit your categories perfectly, and my kind hint that similie/analogy was meant was missed by you?
Draconian? I don't recall saying that ... So I'll spin that for fun -- I was merely indicating that I didn't want to offer temptation to anyone. The very fact that someone is worried about who is using their software and wanting credit implies that very temptation.
Do you trust me not to use your name and address against you *in any way* if I somehow am upset BY you and have these pieces of information? Do you want to give me your name and address with no strings attached to do with whatever I want? (I'm not Dracula, that's for sure.)
I'll look into it again ... but I think PyQuante was the one my friend tried and didn't work.
As far as I know, the DFT algorithm[/s] is/are probably worse than Hartree Fock implementations -- it is designed to reduce computational load not improve accuracy.
The algorithms I was speaking of are computer based, hence they couldn't have been around longer -- there was at least the time for an implementation to be coded after the invention of the computer. Do you not count the Babbage machine as a computer? It does run a crude program.
Oh, am I being unfair?
Point sort of well taken. I came up with the generation of hydrogen at the result of electrolysis -- based on the standard electrode potentials corrected for temperature and concentration even (in water).
What surprised me was a difficult to discover chemical reaction that someone else had found which made it happen in spite of this problem. In most experimental attempts I get hydrogen contaminated product. The success of this method delights me, and I would like to explore why it works; I'm not sure if it has to do with complexing, or the particular acids used -- especially because I had to substitute a chemical in the reaction that wasn't in the original experiments -- for environmental reasons.
Not only did the reaction succeed, but it went beyond even the original experiments in quality. I found that I could produce a product *and* the expected post processing from the literature was un-necessary in my formulation.
I suppose I could try to figure out what happened using a slide rule, paper, and other easily available methods -- but then I would be likely to make a mistake, and it would take a long time to redo the calculation. I think it would be nicer to have my computer do it for me.
As far as a solid understanding .... how would you know?
I rest my case.
It is an algorithm as well. Many mathematical statements are algorithms -- take Sigma notation for example. If you want to quibble over distinctions which don't make any difference... my brother was a math major, he likes to do it too, sometimes -- but less often now. Perhaps you would like to be introduced?
Oh, they are *exact* ... then either they are not solvable in many cases ... or the people who implement computer programs based on them simply can't get them to work in every single case ... or they are not free and require coding anyway. Am I mistaken?? I am not omniscient, so since I get frustrated with other people's garbage is there a problem with my attempting something new?
Historically, I can say nothing pisses me off more than spending $5000+ on a piece of software, and having to contact tech support only to have them tell me; "the soulution is -- just don't do that.".
I am speaking from experience. Somhow the spend $1000 on a gamble just isn't appealing either.
I have studied variational calculus since the 8th grade -- my original purpose in doing so was to understand Schrodinger's equation.
I have a hard time believing what you say here as anything but double talk.
If these methods worked perfectly or exactly as you seem to be selling them as, and if you really include what I desire to simulate -- in the sense that I am proposing -- then they would definitively predict what happens in say, "the Bell inequality", based experiments. Considering most of the physics community is still arguing over that, including ZPE (with known flaws) as one objection, and no definitive set of experiments has yet buried the competition ... I see no point in arguing over what is better, or what is "perfect".
I'm interested in an experimental simulation method; the one I have chosen is just different -- it is not likely perfect either -- but I would like to try it; Are you interested in actually addressing the thread questions I am proposing or something else after my clearing the air here?
If you have anything to add concerning how to compute how long an electron stays in an excited state, I'm interested. That is one of two issues which prevent me from finishing the code which I already have; as I said at the start of the thread, I'd like to see how far I get; and I mean after I know the answers to my actual questions.
Last edited: Jun 20, 2010
8. alxm
alxm 1,864
Science Advisor
-I do deny sigma and pi bonds exist in molecules. Orbitals as well. These are all abstract concepts derived from various theoretical models, not real physical things.
-Force-field methodology doesn't have anything to do with 'sketching' anything. It's a semi-classical physical model, not a single scalar or vector field or anything of the sort.
-All DFT methods in practical use for molecules are more accurate than the Hartree-Fock method. Obviously you don't know the relative size of dispersion effects if you think that would be a deal-breaker.
-Yes, QC methods were being used prior to electronic computers, in the late 20's and 30's. Newton didn't have any problems developing his algorithm for root-finding without one either.
-Computers WON'T 'do it for you'. That was the point I was trying to make. Most chemists in most situations do not use quantum chemical software at all. Most chemists don't know how to use quantum chemical software.
-You think it's reasonable to demand that they give you software that took them years to write, no strings attached? Crazy.
-The fact that a method is exact does not mean a given implementation of it is necessarily numerically stable in every case, or that code is bug free.
-The fact that a particular implementation might not work (perhaps due to user error), does not invalidate the theory behind it in the slightest.
-A quantum-chemistry program can't and won't tell you about the Bell inequality. It's an entirely different system and problem. They won't calculate scattering cross-sections or do your taxes, for that matter. Also, it's not the calculations themselves which are in dispute.
-It's "Heisenberg" not "burg" (since you seem to think spellings are of great importance)
-The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.
-You can produce _exact_ pictures of hydrogenic orbitals simply by plotting the spherical harmonics, since that's what they are. But they won't give you a good approximation of any multi-electron system.
-How to calculate lifetimes of excited states is also in every textbook. Fermi's Golden Rule, look it up.
-The Schrödinger equation does not have even a superficial resemblance to the equations of the Bohr-Sommerfeld models, in my opinion.
Beyond that, I won't address your 'independent research' since this forum has clear rules about that stuff. And that's pretty much all I have to say on the topic.
9. edguy99
edguy99 296
Gold Member
"Nobody yet has a flawless simulator...."
Although I wont dispute this, I write animation software in the timeframes/distancescale from microseconds/nanometer to show photon size and movement, attoseconds/femtometers for electron motion and Yoctosecond/yoctometers for nuclear decay for neutrons under a collection of different physical forces.
You may require several animations specifically to account for distance and time (large scale and/or small scale). Have you given any thought to distance and time scale for your simulation? Some view this type of animation as "speculative" and do not like the discussion here, but I would be happy to discuss it if you would like to send me a message or sample picture of what you have in mind.
10. Hi;
I think the photon scale and the electron are interchanged?
I was planning on using my electronics simulator base as-is for the project at first -- and modifying it later for improvements. There is a saying "one in the hand is worth two in the bush."
The electronics simulator auto-adjusts time-steps depending on interaction values -- that is, the timesteps computed become larger as the path(s) become more predictable / repeating of past history -- and smaller where a small change can cause a large instability.
The algorithm is complicated but like a video game drawing algorithm -- is carefully optimized in the inner loops; Roughly speaking, it allows me to simulate around 1000+ items in a field, and easily 100,000 nodes+ in electronic circuity on a modest single core Celeron processor (SSE2/SIMD) -- although, the computation time drops exponentially with the *randomness (non-repetitive change); and conversely accelerates with the repetition of similar events/changes. (*randomness within a steady state is not what I mean.)
The details of the algorithm are beyond the scope of Quantum Physics, obviously, and are a programming/data-structures nightmare; but it's extremely powerful and fast compared to alternative solutions used in the electronics simulator industry (eg: typically Laplace transformations).
At heart, I have opted for a compute quickly -- and use a statistical method to do quality control.
If you are familiar with industrial process control -- the method is essentially the same kind of thing.
In the electronics realm, I specify the time-step I wish to output images/plot at, and the simulator discards all intermediate states between frames to be drawn. This eliminates the highly variable time-step plot problems required for efficient & accurate calculation of events; In electronic simulation, the time-step typically varies from milliseconds down to picoseconds. A crude estimate for (super)atomic phenomena is in the range of milliseconds down to sub-atto second 0.01 atto (YUCK).
I haven't given much thought to the time-steps I will output -- outputting every subtle change doesn't make a whole lot of sense as a large number of small changes are required (typcially) to make a long term macroscopically noticed change;
I plan, for right now, in just hand setting the time-steps as I do in electronics simulation for different regions of interest. Eg: large time-steps which allow me to ignore the development of the initial state (typical state) of the system -- but still do a sanity check. And then focus the time-step down in regions where interesting things are happening that I am trying to understand better.
Even though time is involved in my simulation, I was not planning on making time evolved movies in the end. The reason for this is twofold --
1) the simulator itself determines how to focus CPU power computationally on different regions of space; This is what allows the simulation to proceed at much quicker rates than would be possible if all points were computed for the same time-step everywhere in parallel. Items of circuitry / space which are sufficiently decoupled to introduce small errors by reducing the update rate of *changes* in interaction become time-wasters each time a displaying of state occurs. Of course, blindly printing every time-step automatically chosen by the simulator is the absolute worst time waster. eg: 1e6 + times slower....
2) In electronics, the time-evolving waveforms are often important; so much so that they become unintelligible if the time-step varies, so I am forced to plot a maximum & fixed time-step rate if I wish to easily extract information about the evolution of the waveforms in time -- so that is the way my simulator works now -- and to keep the cpu time down, when the simulator has to reduce the time-step for accurate results; intermediate steps are discarded from the plot to save time. To keep the problem tractable once I get to large (65-100 atom) systems, I plan on having the simulator ultimately record changes in state and not changes in time; and I ultimately, I would like to be able to specify which states are of interest to reduce the data even more.
I do have open questions about how much information to save at each plot point; but I can't solve those in my head -- I have to actually experiment to see what is possible.
The space, is unfortunately, limited by the nature of the simulation, cache memory, main memory, etc.
I won't be able to simulate more than a few cubic microns of space -- and only a 1000-2000 or so electrons/protons/etc. within that space. (Presuming the questions I am trying to get answered do not significantly degrade the estimates.... engineering :) )
Why would some consider it "speculative", atto second laser pulses are regularly used to excite atoms into the wave packet superposition of states which mimics classical motion. That's experimentally done, and I wasn't aware that the people doing this were violating any principle of quantum physics. I know that when I first heard of the frequencies they were talking about and atto second that the uncertainty principle crossed my mind. For the purposes of simulation, though, these small times replace the differential element in calculus -- and anyone who uses calculus to work with Schrodinger's is automatically guilty of using small times as well, for a mathematical purpose if not a physical one. hmmm..... Separation of Church and state, or ought it be Calculus and state... hmmm....
I wasn't planning to get into that. And from the other response I got, I need to correct the extra words being put on my lips...
11. Alxm,
I have tried a few times to pen a response, but I simply don't post because the response is so much longer than your post. There are overlapping issues in what you are remarking about -- and clearly some confusion about what I actually said, and what I appear to have said. So rather than reply head on; I hope I am able, in this way, to emulate your very compact response style which I rather wish came naturally to me.
I use Fermi's rule for understanding stimulated emission events. If what you are saying isn't nuts (and I do admit you have an IQ) then it appears to imply that there are no such things as truly "spontaneous" emissions. If that's the case -- my simulator already takes care of the issue, and I am wasting time asking about it; If your comment is a mistake, let me know -- otherwise I need to change the question.
If it is all you have to say, then the thread won't be cluttered up with more of this. I haven't taken sides in any absolute way on controversial issues. In the last (and only other) thread I ever posted on the forums -- it took the science advisor several pages to actually come up with an answer that made any sense. I still shake my head that he could have possibly missed that using a 1024+ digit calculator means one is serious about verifying something, and not just 'estimating'. Your response gives me hope that you have more awareness than average. Beyond that, I will simply remark that the research I have cited thus far in the thread -- isn't mine. Secondly, it was searches on the physics forums and recommendations of "science advisors" and more important "physics mentors" which led me to follow the links to these controversial issues in the first place.
* up
* same
My respect for you went up a notch or stayed the same with each of these comments. I agree with them, and always did although that may not be obvious at first reading of my past comments. They do not affect why I am doing anything, though.
Gee. In one line: My "hand" is also an abstract concept built upon sigma and pi abstract comments, and I can draw my hand with my hand which does not invalidate my theory of what a hand or sigma or pi bond is -- in the slightest.
OK. Your entitled to your opinion even when it has almost nothing to do with what I said. Sommerfeld's modification is not Bohr -- and DeBroglie only interpreted, but did not change Bohr's theory.
Nor can I prove or falsify what you said -- so I'll join it: Since I don't know what is the cause of what I witnessed -- Better safe to include dispersion than sorry I didn't.
BTW: The experiment I am trying to understand has nothing to do with cold fusion or Blacklight power, etc. It is purely related to the change in economics over the last 100 years as to what processes are economically viable. and environmentally friendly.
! Then, you admit we are on par rather than you being so much more superior than I am ?! Wow. I'm flattered.
! Touche!
! LOL: Say that again with a straight face after buying $50K software to solve a problem which you have only a week to solve before your competitors take your client away; then discover that there are bugs in the software you have purchased, and after you hold hands with tech support to solve their mistake for free (which only they can actually fix in the code) -- and after they turn around and sell your fixes to your competitor and refuse to pay you a dime. (And that is what the nicer of the companies will do. Murphy was an optimist.)
I am laughing at the notion that you might think you will convince anyone that *refusing* to spend money is crazy, or that a refusal to buy is equivalent to *DEMAND*ing something for nothing ???
By the way, thank you for giving *some* of your time away free. I am thankful to Dr. Young (no the name is not a 'simple' coincidence) for selling me a college physics education, and allowing me to use the knowledge after class as public domain -- including his comments.
If one (esp you) are crazy for giving away, or even think you are -- I have the name of some very good psychiatrists with multiple degrees and understanding of other subjects. They do interview before accepting clients, and not many get accepted, but you might get accepted if you are lucky.
12. edguy99
edguy99 296
Gold Member
You could be right, I'll try again (feel free to correct):
1. units Femtosecond(10^-15) and Nanometers(10^-9) to show photon size and movement - with a viewing width of 10 micrometers and a steptime of 10 femtoseconds, a 600 nanometer photon will bounce back and forth on your screen at a comfortable rate when viewed at 30 frames per second. Electron and protons will not show any motion.
2. units Attoseconds(10^-18) and Femtometers(10^-15) for electron motion - with a viewing width of 10 angstroms and steptime of 10 attoseconds, an electron with 10 evolts of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond, photons travel too fast to be seen at 300 picometers/attosecond and protons (heavier) dont really move at all over this timeframe.
3. units Yoctoseconds(10^-24) and Attometers(10^-18) for nuclear decay of neutrons - with a viewing width of 10 femtometers and steptime of 1 yoctosecond, a down quark changes to an upquark and emits an electron and an anti-electron neutrino. The w-boson would be visible for less then a yoctosecond and the neutrino would float off at a comfortable pace.
You mention 3 animations: Bohr Magneton, orbital transitions, and the Stern Gerlach experiment;
I will give some thought to time and distance scales and post later. You may well have something in mind already, if so please post. With regards to changing timescales. I dont see how this is done accurately as certainly the location of particles will not change when you change the timescale, but the momentum and momentum direction does change and I dont see how they can be recalculated without going to an n-body problem... I find it saves time to do 2 or more animations, one at the slow rate and a different one at the fast rate.
13. mmmm... much better.
At this resolution, in free space: c=3.0x10**8 m/s, the wavefront will step with a distance of 300nm / femtosecond. If the screen distance is 10μm, that means steps= 10/0.3 = 33.3 frames.
So it takes around a second to cross the screen; if a wavefront simulates in dispersive media with a slower rate of travel, this would indeed be comfortable.
#2 I will just let sit, #3 -- wow. At least I don't think I will have to worry about that time-reference...
14. edguy99
edguy99 296
Gold Member
A couple of questions and comments.
Regarding the Stern Gerlach experiment:
If you wanted to do an animation with a viewing width of 10 meters (to show the equipment), what timestep would show electrons moving across your screen at a reasonable speed? Ie. if I had a laser gun pointed over your shoulder checking the electrons speed, how fast would they be going?
Clearly the spin up electrons move one way and the spin down electrons move the other way based on the magnetic field, but I dont think they all move "exactly" the same. Is there research that shows the pattern of the distribution of electrons that show up over time or do they indeed all move the same up or down amount?
Regarding orbital transitions:
A viewing width of 10 angstroms produces the kind of picture you would see of "real atoms" like the IBM pictures with scanning electron microscopes, what you dont get is the timescale. A timescale of femtoseconds works well to view things like proton/nucleon vibrations but even low energy "unbounded" electrons are moving way to fast to see. A timescale of attoseconds allows you to build your electron probability clouds for the frame and deal with a free electron floating by that you know, because of its momentum, will be somewhere else in the next few frames. This timescale has the same problem you talked about where photons will seem to appear out of nowhere.
I would appreciate if you could expand on this comment.
15. OK. (SG henceforth in my notes.)
I intend to mimic the MIT reproduction of the experiment using hydrogen and either accepting the factor of 40 mass change, and therefore expected 40x scale change -- or artificially increasing proton mass 40x to simulate a fake "potassium" atom in transit with hydrogen. I haven't calculated the speeds, but the transit time and dimensions are essentially fixed by the experiment. If I modify them arbitrarily -- then verification of the simulation will be difficult at best. If I were to animate a cross section of the experiment -- the atom would be vanishingly small. I was thinking to simply collect the
[x,z] data points at the end of many experiments for a statistical sample equivalent to the SG photograph.
Note the MIT experiment uses a slightly different shape of electromagnet than SG. I believe SG is more of a triangular wedge near a half-circular socket. MIT's is on pp. 17, from a Google search of Lab 18, MIT and Stern Gerlach:
I'm going to answer these next to questions out of order, as I think I may be clearer that way.
I stated the idea poorly. I am less certain of the shape/distribution of the EM field as one approaches an electron's vicinity. An electron is a dynamic magnetic dipole -- for there are no monopoles of magnetic "charge", and thus the field changes around the electron as it "moves";
(Dispite wikipedia's present fictional monopole account.... which is "mathematically" equivalent, supposedly -- but whoever wrote that makes me groan.... ).
A dipole's detectable field scales as r**-3, where r is somewhat ill defined, but as one gets farther away from the source(s) the error in r from variations in dipole shape contributes less and less to the overall values. I think (but am not absolutely certain) that much of the geometry distortion of a dipole is primarily in higher terms (r**-4) etc.
The primary meaning of my comment is that there are several unknowns affecting the field of an electron which become more important as computes closer to it's location. These unknowns are not necessarily a result of the Heisenberg uncertainty principle.
In general, the idea that an electron is a physical particle capable of "spinning" on an axis is discouraged -- and although I am not certain of what experiment(s) disprove the idea altogether -- I seem to remember someone mentioning historical experiments that disprove any "radius" being assigned to an electron (either detected or not...). But if an electron is not a particle with spin, then it must still at least be a distribution of electric field and time; the latter idea is true whether or not an electron is a physical particle.
In SG, the developed photograph of the silver atoms which hit the target is quite distinct. So much so that there is little variation from electron to electron (and perhaps even less spread would be realized if the experiment was improved.)
See p. 56 for a photo of the actual silver atom pattern on the film target.
A couple of things to note about the photo.
The "ideal" silver dipole would be located at the center (x axis in the photo) for one launched non-skew into the magnetic field. It is precisely here where a spike develops in the x direction -- that spike is generally reasoned to have something to do with the non-symmetry of the pole faces on the magnet, although no explanation I have read is really satisfying for a mathematical analysis of SG's actual magnet is not done. The other plots side is flattened, and that is typically described as being because of the silver atoms physically hitting the rounded part of the magnet cavity.
I didn't see a plot for the MIT version of the experiment which uses a magnet shape which ought not have that spike, either. The only obvious clue pointing to the anomalous nature of the spike is that it is asymmetrical -- The x axis was oriented vertically in the actual experiment, so the spike is even against gravity if I recall correctly. But if the explanation of the other side's thin-ness is truly because of physical clipping, then there is no way to be certain the spike would not have shown up experimentally due to the field shape if the atoms had not hit the surface of the magnet. One really needs a predictable *linear* variation in magnetic field strength over space to be able to effectively correct for physical manufacturing limitations of the slits, etc, and SG doesn't really have that.
Comparing the demagnetized plate to the magnetized one, there is a fairly strong indication that the plates were sent on the postcard willy-nilly and with no definite orientation. (The mirror image reticule pattern on the right image re-enforces that notion...) The demagnetized version ought to be a purely flat line -- but isn't. In the lower half of the line a clear broadening of the pattern can be seen -- which if the magnet is truly off, can only be attributed to the shape of the slits allowing the silver out -- or the position of the silver oven behind the slits biasing the source statistically.
In any event, a careful inspection I believe I see trace widening on the upper half of the pattern in the magnetized view suggesting that the flat line slide is actually rotated 180 degrees from how it was taken in the other photo, with an optional mirroring on top of that...
Using that information as a corrective measure, and knowing that the slit is horizontal, one can estimate the amount of classical skew in trajectory a silver atom would have (incoming y angle and x angles with respect to the demagnetized photo, in contradistinction to a perfect 90 degree angle.); and since the size of the slit is likely large compared to the wavelength of the silver atom, these classical trajectories ought to be fairly accurate. Each point, then (y on the photo) has a definite angle (source dx/dz, dy/dz) from which the silver atom could have come, and the effect of this distribution needs to be calculated and compensated for to determine what an idealized simulation ought to look like.
Generally, there ought to be fewer angles of approach as one gets closer to the edges of the slit (y on the photo), although dx/dz will be fairly constant across the whole slit except where the demagnetized photo shows thickening.
So, I think the most accurate part of the magnetized photo will be the lower right quadrant of the photo taking the approximate center of symmetry of the shape as the origin.
Using a straight edge, the line widening as one gets closer to y=0, increases very linearly (eg: right most trace / lower right quadrant) until extremely close to the magnet center where the field of the magnet becomes less known / near the surface of the magnet. So, there are two (ideal) linear effects with approach toward the center -- 1) the width of the trace, and 2) the offset of the trace from the picture's y axis
The offset is explained by the experiment itself -- eg: the proportionality between magnetic field intensity gradient with x offset in the photo vaires approximately linear in SG (and linearly in the MIT version, according to the MIT text). The widening of the trace, however, is something I don't know how it theoretically ought to proceed, and which is caused by "all other differences" in the electron including any Heisenburg and QM interference effects.
If only the electron's position [delta x, delta y] were affected -- I could more easily separate out what causes the thickening -- but as there is also skew, the problem isn't solvable qualitatively in my head. The variation in trace with, then, is something I don't know if it would replicate in an idealized experiment or not; and will need to work out before verifying my simulation against SG.
If I am able to correct (reduce data) in the photo by mathematical modeling of skew, other features might become visible which presently are not. But to really do this correctly, the best source of information would be a run of the MIT experiment with computer usable data points to operate on. I'll have to look around and see if anyone has posted a good run, and if not, perhaps I will put some thought into re-constructing the experiment. I have the vacuum equipment, some very pure micro-crystalline level homogeneous Iron, and the machining equipment to re-construct the magnet, along with motorized micrometers (robotic) -- but that would take quite a bit of time and effort to put together for me in my present state .... if anyone knows of an online data source, I would appreciate it.
I'll probably post a bit more tomorrow... I am still trying to resolve a second question(/s) concerning the time/intensity distribution that an electron dipole has by focusing on what a quantized magnetic moment actually means, both classically (the limit) and Quantum Mechanically. I will need to think out loud, and perhaps will make some mistakes when doing so -- but hopefully the sharp eyes of others will be helpful there and I can get enough of an answer to my second question/(s) to model it reasonably well for simulation purposes.
We'll just have to see what works once I get there.... but I can certainly try it.
16. edguy99
edguy99 296
Gold Member
Thank you for the post and link:, this helps the discussion a lot. From the figures, it appears they shoot potassium atoms through about a .5cm channel of a 10cm long magnet and check if they hit a 4mm paladium wire that is moved around. An animation width of 1 meter sounds good but I remain stuck on the timeframe that would show on your computer screen, atoms emerging from the oven (say one per quarter second) and hitting the wire within a second or two...
The speed of the potassium atom appears to depend on the oven temperature that drives the vapor pressure that determines the speed of the atom. I have assumed here that the speed is important since a slow potassium atom spends a longer time under the magnet then a fast atom and hence would be bent more. This would seem to suggest that even a small adjustment in temperature would cause the atoms to either spread out too much and be undetectable or not spread out at all. Perhaps I miss something there or mayby the temp at 190 degrees produces a consistent enough speed that you do not see the effect. or something else..
I agree completely. There are at least 2 major problems, the size you mention - I believe radius calculations for the mass involved come out way to high (300 femtometers vs a lorentz radius of 3 femtometers) and also a problem, I think the electron can be flipped over 720 degrees and a spinning disk is only 360 degrees.
Anyway thank you for the info, and if I can nail the speed of the atoms I visualize a pretty nice first animation showing an oven blasting (with a setting for temperature) out potassium atoms at a particular speed??? and rate??? and over a timespan (microseconds???) you would see atoms hit a wire after being bent by a setable magnetic field with the proper bending of up and down electrons. The nice thing about starting the animation at this level of time and space magnification is that its pretty clear what happens at this scale and there is no need to prob the details of what the atoms actually do or look like. Probing a more detailed time or distance scale will be more difficult.
17. Without checking, that sounds correct from what I remember.
I agree.
The experiment requires the student to wait for the temperature to stabilize for a long period of time, otherwise the data will be in error -- so I am fairly sure that small variations in temperature affect speed significantly.
The divergence of the magnetic field is what causes a vertical accelerating force to appear on the magnetic dipole in the first place, and divergence is independent of speed. The traveling speed of the dipole, then, has nothing to do with vertical acceleration -- eg: the overall charge is neutral, and the + and - are both moving in the same direction so even Hall effect does not occur. However, the + and - charge would want to be deflected in opposite directions horizontally in proportion to the speed with which the atom travels through the B field, and I am sure (classically) that would cause the unpaired electron and remaining nuclear charge to align on a horizontal line with respect to each other and in a plane 90 degrees to the magnetic field direction.
There is, then, going to be a vertical force vector independent of speed, and a horizontal one which is counterbalanced by electrostatic attraction. Since acceleration is an integral of force ( on average f/m), the deflection depends on time spent in the magnetic field. Even at a constant temperature speeds will vary statistically. These variations will cause (add to) trace width spread in the photos shown; and also (small) skew angle which does the same thing. The lab sheet may contain enough information to compute the speed variation, and one can compute a percentage of average time which eg; 99.8% of atoms will fall within, and the spread of the atoms (not including skew angle) due to this variation in time will be a linear fraction of the total deflection. % time error = % y deflection in error.
You'll have to excuse my ignorance of vocabulary; as a BSEE, I was taught primarily about semiconductors which is a bulk phenomena. Different terminology is employed, perhaps somewhat simplified, because the dominant effect is not always the same as would be the case with discreet items. When you refer to a "Lorentz radius" are you speaking about a contraction in length due to motion of some kind? And regardless of that, what calculations did you do to arrive at those particular numbers?
You're welcome. And thanks for just speaking about the subject in general; in your own way you have brought up something I did not think about regarding this experiment -- and that is the fact that the scale must exceed the few cubic microns which I can simulate realistically. In order to simulate the MIT version of SG, I will have to find a way to delete nodes from one side of the space I am simulating and add those nodes to the other side so as to keep the moving atom inside a window of simulation space. (Not to mention, computing a new value for the moved nodes...)
That will, unfortunately, complicate the simulator enough to seriously increase the time it takes me to implement it as my present simulator does not have that capability. :yuck:
+1 to the to-do list.
More will be coming about the "spin" of an electron when I have a chance this evening, along with Q/Qs that I have about how to work with it -- and a short summary of what I am presently considering. (Next steps.)
18. OK. Fell asleep on vacation ... and fell asleep last night early.... and this morning again .. but I'm back.
A brief review of the ideas I need to build the simulator:
So far I have spoken about orbital transitions -- which has nothing to do with *where* an electron(/other item) is at the instant before the state change and after. Only the probability of where it is found in the future can change -- without the "actual" position changing for a single run of the simulation at the instant of state change. So, one may think of a simulated orbital change (state change) as being a change in the statistical rule about where the item will diffuse in the future.
Alexm ties "spontaneous" decay of excited state orbitals to a "disturbance" in the EM field by invoking Fermi's golden rule. This idea is new to me, for in semiconductor physics the mechanism of spontaneous decay is *NOT* explained, rather it is pragmatically considered an empirical property of the semiconductor which is modified by disturbances in the EM field (eg: in a perfect crystal, it would be empirical -- but in a doped one, the lifetime of an excited state is the empirical value *decreased* by the probability that a fermion will diffuse into a defect in the crystal causing a stimulated decay of the excited state.) Since no one has contradicted Alexm, his opinion stands as the method I will attempt to use for simulation.
The remaining issue I have to deal with is the "spin" state of an item. This state is what causes an "anomaly" in emission spectra similar to fine structure (haven't studied this, so I am speaking in general) based on the magnetic state of individual protons in the nucleus coupled to the magnetic state of the light emitting electron. The anomaly is extremely small since the Bohr magneton is affected by the mass of the item -- and protons being relatively heavy, have ~~ 1/1836 times the effect of an electron.
I have no engineering text describing the exact nature of various experiments about the Bohr magneton and guidance/input from those who might know of experiments/articles to review would be appreciated -- but I do know that spin of an electron has a mechanical moment from an experiment done by Einstein and a colleague, and I do know that the orientation of the magnetic dipole in space can change.
Since classical angular momentum is found experimentally when dealing with the Bohr magneton -- along with non-classical quantum effects -- I am simply going to write down what I think I know -- and hope for correction or at least suggestions of what I might be over-simplifying if anything.
A top (mechanical moment/dipole) spinning which has force (torque) applied to re-orient the axis of rotation will resist re-orientation by precessing about it's rotary axis. Gyroscopic formulas tend to focus on the period of precession -- and I am unaware of how one calculates how much a top will reorient it's axis of rotation with respect to time and torque; nor if such a reorientation is possible with a frictionless device....
I assume reorientation is possible even in a frictionless environment, since permanent magnetism in non-superconductor material is (as far as I know) purely a spin based phenomena and there is no friction that I am aware of for a spinning electron (there are no electrons that *DON'T* 'spin'). I suppose it is possible that quasi-Kepler motion of an electron around an atom (P orbital, etc.) might also contribute to magnetic field, but everything I was taught in chemistry indicates that unpaired electrons are the sole cause of magnetism. So I am assuming that is the case for the moment. (pun intended.)
Also, I have seen the "orientation" of spin as being what supposedly changes in the NMRI/MRI effect.
I have not seen any mathematics which explicitly show how QM-wise engineers came to predict that the magnetic field would "FLIP" when appropriate resonance radiation was applied to the electrons in hydrogen (the typical NMRI target), although the energy required being proportional to a statically applied magnetic field does make sense. The energy required to reorient the magnetic dipole being readily computable from analogous situations in DC/AC motors being used for power generation. The magnetic dipole rotor requiring work to flip orientation while in the magnetic field of the motor's stator.
I am presuming that a rule akin to Fermi's golden rule needs to be used in this case (spin flip) -- but I have not come across in commonly available literature how NMRI is deduced, nor how to handle the QM states.
Any pointers/example calculations which could be applied in the case of a non-static magnetic field would be appreciated, as large uniform magnetic fields are not what will be found in molecular simulation -- but rather weak dipole moments from various items located near the "electron" or other simulated item of interest which dynamically can re-orient.
A second issue that comes to mind, and which may/or not affect the discussion is that Ampere's law and other effects computed using calculus assume a uniform magnetic field when taking the limit as area goes to zero (eg: differential area or volumes...which have an existence whose consistency is questionable given the Heisenberg concept... ). A magnetic dipole's strength is measured in current x area enclosed by the current loop. I*area**2. However, making a large rectangular and planar loop of wire and applying a constant current to it -- and then using a compass as a probe the field's strength has convinced me that B field magnitude in the plane of the loop, and inside the loop, is stronger as one gets closer to the wire -- and weaker as one gets closer to the center of the loop.
When replacing distributed currents with individual electrons, protons, etc. The magnitude of the current is replaced with the velocity of the charged atomic item in relation to the path length required to circle an "enclosed" area. Therefore, the product of area enclosed to velocity of circulating charge (or E-field disturbance), is then constant if one is given a constant magnetic moment (eg: The identity of the Bohr magneton for an electron..) So, the diffusion of the electron in space to create the magnetic moment must have an average value in an angular sense -- and because of inertial reference frames, helical paths must also be possible where the path is closed only on a moving reference frame.
What I have stated here, is the sum total of what I am certain of regarding magnetic moments; discussion of how these ideas fit with QM/Schrodinger's would be greatly appreciated.
Last edited: Jul 1, 2010
19. edguy99
edguy99 296
Gold Member
A quick sidenote, the June 10/2010 issue of Nature has a pretty good article:
"Electron localization following attosecond molecular photoioniztion" that starts with "For the past several decades, we have been able to probe the motion of atoms that is associated with chemical transformation and which occurs on the femtosecond (10^-15) timescale. However, studing the inner workings of atoms and molecules on the electronic timescale has become possible only with recent development of isolated attosecond (10^-18) laser pulses."
The attosecond timescale will (I think) prove to be very important in the history of the electron.
The object you may be thinking of is a Bloch Sphere, animations here. Although not directly derived from general relativity, It does properly predict most (if not all) of the properties of the proton. Its important features are an axis that points in a particular direction and a spin direction.
Proton MRI is generally modelled by the Bloch Sphere, animations here and has the right kind of magnetic moment that you would expect from this type of spinning object. Although these animations are showing the protons "precessing", in a solid object it may be just as likely that these guys are not precessing, but somehow building up energy and then flipping. The Bloch Sphere works great for protons and neutrons. For more elements with multiple protons and neutrons, you dont just have spin up/down or more commonly termed +1/2 or -1/2. Lithium-7 nucleon for example has 4 spin states labeled +3/2, +1/2, -1/2 and -3/2. It does not matter so much what the picture looks like, its just important that you label your objects with the right spin so you know how it reacts to the forces around it.
The electron at this level is more difficult. As I understand it, the Bloch Sphere does not work for it as the magnetic moment is slightly over 2 times too big to work (it takes too much energy for its size to flip it..) and there are many other problems. Electron flipping is the basis of atomic clocks and I think many of the properties have come about to explain this.
The Lorentz radius is: "In simple terms, the classical electron radius is roughly the size the electron would need to have for its mass to be completely due to its electrostatic potential energy - not taking quantum mechanics into account." and comes out to 3 femtometers. I think that this much mass in a 3 femtometer radius is spinning at something like 100 times the speed of light. Ie, the electron looks too small to fit that much energy in.
You commented "A dipole's detectable field scales as r**-3, where r is somewhat ill defined" and "the product of area enclosed to velocity of circulating charge (or E-field disturbance), is then constant if one is given a constant magnetic moment "
An important structure to is consider are twistors (one animated here, ignore the bottom row as it is a work in progress and sometimes you have to click a couple of times to get them to run right). It has a defined axis and it has the 720 spin (watch carefully as the blue dot goes through 360 degrees, the electron ends upside down and must go another 360 degrees to get right side up). It would pack a lot more energy in a lot less space and has kind of a "wierd" layout of its dipole field...
I have assumed spin flips do not apply to the SG experiment. There is no reason why the valance electron would not simply align itelf with the external field and slowly pull the atom in one direction or the other depending on whether it was up to down.
Dont know if this helps much, but I have to get back to trying to find the speed of the potassium atoms..
20. Three years ago I was speaking with a physics prof. about the attosecond timescale, funny how it is just "news" now....
With such short pulses, the wavelength/frequency is not very precise. Many experimenters seem to be trying to excite the electron into "wave packet" localized states for study at present. I hope your prediction holds well over time.
Thanks, I'll look at those -- the block sphere is a state-space animation, not so much a physical space animation from what I gather so far. I expect to take a few days sorting through ideas to get a feel for what they are about, and also reviewing a book I bought on introductory QM (2nd ed. by David J. Griffiths) which my college uses in classes for physics majors (eg: I bought it on a whim to help refresh my memory and extend my BSEE knowledge...).
Oh ... m = E/c**2.
but, electrostatic potential energy ? that would be the repulsion/attraction between charges; do you mean the energy inherent in a magnetic dipole moment, or am I overlooking something obvious?
(not that it matters since the result is impractical anyway.)
OK, I see. That's going to take a little time to sink in.
That's a problematic assumption;
The angle that the electron makes with respect to the magnetic field is arbitrary when ejected from the oven. The magnetic field, then, applies a torque to the magnetic dipole attempting to align it with minimum energy in the field. The quantized spin must be "chosen" / "observed" by interaction with the magnetic field, and thus "choose" spin up or down. In essence, I envision that it must "FLIP" from an random analog angle to a quantized one either aligned or anti-aligned (unless it happens to be aligned from the start which is highly improbable).
look at a classic angular momentum demonstration:
In the classic example, when the person's hands apply a large amount of torque the gyro-wheel re-orients its axis of rotation quite rapidly -- however, when a smaller force is applied (eg: the free hanging state) the wheel will precess for quite some time with very minimal change in rotation axis.
In the SG experiment, one has to determine which behavior will result from the magnetic field (or to what degree the axis will rotate / flip ) from the initial condition which has the orientation of spin randomly and continuously assigned an angle relative to the magnetic field.
In the demonstration -- I am not entirely certain that the wheel would have flipped at all without friction reducing the speed of the wheel, however -- if friction is not the main player in the effect, then a Bohr magneton can also be expected to slowly orient itself toward the magnetic field with time in addition to precessing.
There is nothing stopping the "valence" electron from orienting itself relative to the nucleus very quickly -- however, during this orientation, I would not expect the rotational axis to change much. That is, the electron will translate easily relative to the nucleus -- but like a gyroscope -- it not change its rotation "angle" to change it's position relative to the nucleus.
It is a good start; thank you; though it does add questions...
I think the average (non-relativistic) speed, due to temperature, is simply:
v = sqrt( 3*Kb*T / m )
Kb=boltzman constant.
T=temperature (kelvin)
m=mass of a Potassium atom.
I am not sure of the statistical spread (variation/deviation) of speed off the top of my head; but you can at least get an error slope %speed chg vs. temperature change (degree K or C) which would be useful.
21. edguy99
edguy99 296
Gold Member
I have a gyroscope just like this and I find it much easier to compare with a bloch sphere. I like how it spins with its axis straight up (or slightly askew as it precesses under gravity) whereas the bicycle wheel has a horizontal axis relative to gravity.
Many thanks for this, for some reason I thought I had to know much more to calculate the speed.
Last edited by a moderator: Sep 25, 2014
Have something to add? |
96f870207a3a5bf6 | Philip Ball says that physics has nothing to do with free will. Part 2.
January 11, 2021 • 9:30 am
Yesterday I discussed a recent article from PhysicsToday by Philip Ball, which you can access by clicking on the screenshot below. I argued, and will continue to argue, that Ball’s attacks on free will are misguided for several reasons. He fails to define free will; does not seem able to distinguish between predictability and determinism; does not appreciate that naturalism (determinism + quantum uncertainty) absolutely destroys the libertarian notion of free will held by most people (and nearly all Abrahamic religionists); and has confused notions of “causation”. Today I’ll briefly discuss the last point, as well as Ball’s misguided claim that accepting naturalism has no implications for our behavior or ways of thinking.
First, let’s review. Ball accepts the laws of physics as being the underlying basis of all phenomena, and so he is a naturalist (or a “physical determinist” if you will; I’ll simply use “determinism” to mean “naturalism”). But he then argues that this kind of reduction of everything to physics renders behavioral science a straw man. I find that claim bizarre, for even we “hard determinists” recognize that we can’t say much meaningful about social behavior from the laws of physics alone. But our recognition of that doesn’t mean, as Ball asserts it does, that disciplines like history, game theory, and sociology become “pseudosciences”.
First, none of us think that: we recognize that meaningful analysis, understanding, and even predictions can be made by analyzing macro phenomena on their own levels. So this paragraph is arrant nonsense, attacking a position that almost nobody holds:
If the claim that we never truly make choices is correct, then psychology, sociology and all studies of human behaviour are verging on pseudoscience. Efforts to understand our conduct would be null and void because the real reasons lie in the Big Bang. Neuropsychology would be nothing more than the enumeration of correlations: this action tends to happen at the same time as this pattern of brain activity, but there is no causal relation. Game theory is meaningless as no player is choosing their action because of particular rules, preferences or circumstances of the game. These “sciences” would be no better than studies of the paranormal: wild-goose chases after illusory phenomena. History becomes merely a matter of inventing irrelevant stories about why certain events happened.
Ball is correct in saying that meaningful analyses in these areas can be conducted without devolving to the level of particles. But that’s nothing new! Further, he seems to misunderstand the meaning of “pseudoscience”. The Oxford English Dictionary defines pseudoscience this way:
“A spurious or pretended science; a branch of knowledge or a system of beliefs mistakenly regarded as based on scientific method or having the status of scientific truth.”
But in fact, all those areas above, from sociology to neuropsychology, often use the scientific method: the empirical toolkit also used by biology, chemistry, and so on. If they find “truth” by observation, testability, attempts at falsification, and consensus, then they are “science in the broad sense” and not pseudoscience. They are using methods continuous with the methods used by “hard” scientists to find truth.
Second, by his very admission of physical determinism, Ball already settles the issue of free will: we don’t have it, at least in the libertarian sense. His statement below gives away the game:
And that’s pretty much all I care about. I don’t care whether, given you’ve accepted determinism, you go on to play the semantic game of compatibilism (Ball doesn’t). For it’s determinism itself that, when accepted, has profound consequences for how we view life and society. Many disagree, but so be it. One of those who disagrees, though, is Ball (see below).
Ball makes three more points that I’ll discuss here. The first involves “causation”. Because we can’t understand social behavior, or, in this case, the evolution of chimpanzees, from principles of physics, one can’t say that physics “caused” the evolution of chimpanzees. We need another level of analysis:
To account for chimps, we need to consider the historical specifics of how the environment plus random genetic mutations steered the course of evolution. In a chimp, matter has been shaped by evolutionary principles – we might justifiably call them “forces” – that are causally autonomous, even though they arise from more fine-grained phenomena. To complain that such “forces” cannot magically direct the blind interactions between particles is to fundamentally misconstrue what causation means. The evolutionary explanation for chimps is not a higher-level explanation of an underlying “chimpogenic” physics – it is the proper explanation.
Again I assert that, at bottom, the evolution of chimps was “dictated” by the laws of physics: the deterministic forces as well as the random ones, which could include mutations. (I’ve argued that the evolution of life could not have been predicted, even with perfect knowledge, after the Big Bang, given that some evolutionary phenomena, like mutations, may have a quantum component.)
But if Ball thinks biologists can figure out what “caused” the evolution of chimps, he’s on shaky ground. He has no idea, nor do we, what evolutionary forces gave rise to them, nor the specific mutations that had to arise for evolution to work. We don’t even know what “caused” the evolution of bipedal hominins, though we can make some guesses. We’re stuck here with plausibility arguments, though some assertions about evolution can be tested (i.e., chimps and hominins had a common ancestor; amphibians evolved from fish, and so on). And yes, that kind of testing doesn’t involve evoking the laws of physics, but so what? My work on speciation, Haldane’s rule, and so on, is perfectly compatible with my hard determinism. I would never admit that my career in evolutionary genetics, in view of my determinism, was an exercise in “pseudoscience.”
At any rate, Ball and I do agree that evolutionary scenarios like this require a level of analysis removed from that of particle physics, and also a language (“mutations”, “selection”, “environmental change”, and so on) that differs from the language used by physicists. Again, so what? We already knew that.
Second, Ball floats the idea of “top down” causation, something I don’t fully understand but, as far as I do understand it, it doesn’t show that macro phenomena result from the laws of physics, both deterministic and indeterministic, acting at lower levels. To me the concept is almost numinous:
There is good reason to believe that causation can flow from the top down in complex systems – work by Erik Hoel of Tufts University in Massachusetts and others has shown as much. The condensed-matter physicist and Nobel laureate Philip Anderson anticipated such notions in his 1972 essay “More is different” (Science 177 393). “The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe,” he wrote.
I’ll let readers argue this out, but if physicists like Sean Carroll and Brian Greene are not on board with this—and as far as I know, they aren’t—then I have reason to be skeptical.
Finally, Ball appears to think that understanding and dispelling the idea of free will has absolutely no implications for anything:
Those who say that free will, and attendant moral responsibility, don’t exist but we should go on acting as if they do rather prove that their position is empty because it neither illuminates nor changes anything about how we do and should behave.
This is not at all an empty position, not just because it shows that our feeling of agency isn’t what it seems to be (in that sense it’s an “illusion”), but also because the absence of libertarian free will changes a lot about how we view the world. As I’ve argued, it changes our view of how we see punishment and reward, how we regard those people who are seen as “failures in life,” and how we see our own tendency to regret our past behaviors, and wish we’d done otherwise. If you see that people aren’t really in control of their lives, at least in the sense of exercising a “will” that can affect how you decide at a given moment, then it makes you less retributive, more forgiving, and less hard on yourself.
Now I know some readers will say that to them it doesn’t matter. Whether or not we have libertarian free will, or compatibilist free will, they argue, doesn’t matter: the drive to reform prisons will be the same. I don’t agree. And the claim that how one sees libertarian free will affects one’s view of life is supported by statistics showing that if people thought they really lived in a world ruled by the laws of physics, with no libertarian free will, they would believe that moral responsibility goes out the window. (I sort of agree: I still think people are “responsible” for their actions, but the idea of “moral” responsibility is connected with “you-could-have-chosen-to-do-otherwise.”) At any rate, people know instinctively that the common notion of free will has important consequences for themselves and society.
And thus, brothers and sisters, friends and comrades, I endeth my sermon on the lucubrations of Brother Ball.
Philip Ball says that physics has nothing to do with free will. Part 1.
January 10, 2021 • 2:00 pm
I’m getting tired of writing about free will, as what I’m really interested in is determinism of the physics sort, and, as far as we know, determinism is true except in the realm of quantum mechanics—where it may still be true, but probably not. So let me lump quantum mechanics and other physical laws together as “determinism,” recognizing that predictability may be nil on the quantum level. If you wish, call determinism “naturalism” instead. So we can say that “naturalism”, physical law, is true.
Further, as we know, admitting some uncertainty on the particle level does not mean that, even if our behavior is governed by the laws of physics, we could have chosen to do something other than what we did. For physical uncertainties have to do with particle movement, not with the amorphous “will”—the supposed ability of our mind to force our bodies to do different things—that’s essential to most people’s idea of free will.
Anthony Cashmore defines free will “as a belief that there is a component to biological behavior that is something more than the unavoidable consequences of the genetic and environmental history of the individual and the possible stochastic laws of nature”. A simpler but roughly equivalent definition is this one: “If you could replay the tape of life, and go back to a moment of decision at which everything—every molecule—was in exactly the same position, you have free will if you could have decided differently—and that decision was up to you.”
If you pressed most people, you’d find that they agree with these definitions, though the second one is clearer to the layperson. These forms of “libertarian” free will are accepted by many, including of course, those religionists who believe that we are able to freely decide whether or not to accept Jesus or Mohamed as the correct prophet, and if you make the wrong choice, you’ll fry. Only a loony Christian would argue that God would still make you fry if a quantum movement in your neurons made you reject Jesus. No, your “decisions” have to be under your control.
At any rate, physics—naturalism—rules out this type of free will.
I’m pretty sure science writer Philip Ball would agree that the laws of physics are true. But he also argues, in a recent op-ed piece in PhysicsWorld (click on screenshot), that free will has nothing to do with physics. I was going to discuss his piece in one post, but it would be too long, and I hear that goldeneye ducks are disporting themselves in a pond over near Lake Michigan, and I must go see them. I’ll continue this analysis in a final post tomorrow.
Now Ball doesn’t really define “free will”, he says that it is “not a putative physical phenomenon on which microphysics can pronounce—it is a psychological and neurological phenomenon.”
There are two issues in that sentence. First, is free will a physical phenomenon or not? Yes, of course it is, in the sense that all human behaviors are physical phenomena that come from our evolved sensory system and neuronal wiring interacting with our environments, and all of this must ultimately be consistent with the laws of physics.
As far as “pronounce” goes, well, no, we can’t predict with complete accuracy what someone will do, for we lack that depth of knowledge. But we’re getting closer to the “pronouncement” part, as we can often predict with better than even accuracy, via physical interventions or brain monitoring, what someone will do or “choose”. And we can also affect one’s sense of volition by interventions (Ouija boards are a familiar example.)
But to say that psychological and neurological phenomena are different from physical phenomena is nonsense. The phenomena are viewed and analyzed in different ways and on different levels, of course. As Ball argues, we don’t use the laws of physics to help understand or predict human behavior—yet. But that doesn’t mean that determinism doesn’t operate, and that somehow one could have behaved, through one’s own “will”, otherwise than one did. If you could, then our will would truly be “free” of physical constraints.
Ball seems to think that although we can’t use physics to predict our behavior, determinism and “naturalism” are therefore irrelevant to our lives. But they aren’t, for, as many of us agree, including Sam Harris, Anthony Cashmore (read his paper), and many others, accepting determinism can have profound effects on how one sees and wants to structure society. I know many here will argue against that, but surveys of the public show that a). they don’t accept determinism, with most accepting libertarian free will, and b). they realize that accepting determinism affects one’s view of morality and responsibility. (Sadly, they usually think that in a deterministic world there can be neither morality nor responsibility; and they’re wrong.) But I’ll talk about that tomorrow.
But I’m getting ahead of myself. Click on the screenshot below. I’ll just reproduce a few of Ball’s assertions (indented) and comment on them (my words flush left).
To start, I have to agree with Brian Greene, whose take on free will, which I see as correct, is denigrated by Ball:
In his new book Until the End of Time, the US theoretical physicist Brian Greene says that our choices only seem free because “we do not witness nature’s laws acting in their most fundamental guise; our senses do not reveal the operation of nature’s laws in the world of particles”. In his view, we might feel that we could have done otherwise in a particular situation, but, short of some unknown psychic force that can intervene in particle motions, physics says otherwise.
Greene, like many others who take this view, is upbeat about it: free will is a perfectly valid fiction when we’re telling the “higher-level story” of human behaviour. You can’t change anything that will happen, but you should merrily go on thinking and doing “as if” you can with all the attendant moral implications. Maybe this picture works for you; maybe it doesn’t. But in this view, you have no say about that either.
Metaphysical? It’s metaphysical to say that underlying our behavior are unalterable laws of physics? (Screw “cause and effect” for the moment, as they nebulous, philosophical, and irrelevant to determinism.) If you are a determinist, then there’s no way you can accept libertarian free will. My goal is not to engage in semantic arguments about what free will can be in a deterministic workd, but to ask a scientific question: is there anything we know about science that tells us that we can “will” ourselves to behave differently from how we did? The answer is no. We know of nothing about physics that would lead to that conclusion.
Ball then proceeds to construct what I see as a strawman:
Perhaps that is the bitter truth. Why should we sacrifice physics just to save the face of other disciplines? But let’s consider the alternatives. Understanding decisions and behaviour through psychology allows us to form hypotheses and test them empirically. Some of these look as though they’re right: we can reliably predict what might make people change their behaviour, say. If, however, physics demolishes free will, this is just a peculiar coincidence. Forget all the “as if” gloss: reducing all behaviour to deterministic physics unfolding from the Big Bang offers us no genuine behavioural science at all, as it denies choice and puts nothing in its place that can help us understand and anticipate what we see in the world.
. . . It is not because of the sheer overwhelming complexity of the calculations that we don’t attempt to use quantum chromodynamics to analyse the works of Dickens. It is because this would apply a theory beyond its applicable domain, so the attempt would fail. Greene presents the matter as a hierarchy of “nested stories”, each level supplying the underlying explanation of the next. But that’s the wrong image. To regard every form of human enquiry, from evolutionary theory to literary criticism, as a kind of renormalized physics is as hubristic as it is absurd.
This is a strawman because none of us deny that there can be behavioral science, and that one can study many aspects of human biology, including history, using the empirical tools of science: observation, testing, falsification, and a search for regularities. It’s also a strawman because the issue is one of physics underlying human behavior, whether or not we can use it to predict that behavior.
But behavioral science isn’t necessarily “pseudoscience”. Regularities can be tested, confirmed, or refuted—as Ball admits. For example, I would predict that if a madman approaches a playground with a gun, a parent would first rush to save their own child rather than somebody else’s. That’s derived from kin selection theory. Or you can predict that if someone accumulates more desirable things in a lab experiment, like donuts, the value of an additional donut—its marginal utility—will diminish. That’s economics, but comes from selfish human behavior. And we can predict that if we show someone that their wife is having sex with another man, that person will get angry and jealous. That is NOT a “peculiar coincidence”, but also derives from evolution, as does much of human behavior that we see in our striving for repute, power, wealth, and status.
And history is surely not a “matter of inventing stories” about why certain events happened. True, we often don’t know for sure, as we can’t easily determine historical causation, but I’m sure historians wouldn’t see themselves as “making up stories”. If, for example, you think famous person A did X because he knew that Y was true, and you find out that that person A didn’t really know that Y was true but thought that Z was true instead, you’ve falsified a historical argument. Historical records exist to check assertions of fact. That’s why we know that the “Bethlehem census” of the Bible is wrong, and why the claim that the Holocaust was just prisoners dying of disease and not deliberate extermination is equally wrong.
To say that any behavioral regularities we see are mere “peculiar coincidences” is a claim that evolution itself has nothing to do with physics. For it’s evolution that’s at the base of so much of behaviorial science. And evolution results from the differential replication of different genes, which become different via mutations, which are of course physical phenomena.
It’s “not even wrong” to say that determinists who reject the idea that our “will” can interact with our bodies are at the same time claiming that history, game theory, economics, and other forms of “social science” are all pseudosciences. If you know what a pseudoscience really is: a belief system that rejects testing and falsification by the methods of “real science” and is buttressed by confirmation bias, then you wouldn’t make the statements that Ball does. He’s arguing against a view that nobody holds.
And now I must go see my goldeneye ducks. More tomorrow. Read Ball’s article.
h/t: David
Once again we’re told we have free will, and once again it makes no sense.
December 20, 2020 • 11:00 am
At any rate, Oliver Waters, writing at Medium, assures us that we do have a form of libertarian free will—or so it seems. I say “seems” because he presents an argument based on “critical rationalism” that makes no sense to me. I’ll criticize it a bit, but I can sense some flak coming of this type: “You need to read many volumes about critical rationalism before you can criticize my argument.” Sorry, but I won’t, for if an author can’t give a sensible argument in a reasonably long piece, it’s hopeless.
Click to read:
I can’t find out much about Oliver Waters save his Medium biography, which says this: “Philosophy, psychology, economics and politics. Tweets at @olliewaters.” But that doesn’t matter, for it’s his arguments for free will that are at issue.
Right off the bat Waters defines free will in a wonky way—one I disagree with. It implies—and this is fleshed out in the rest of the article—that he believe that determinism is not mandating our decisions: that there are “real choices” independent of the laws of physics, and not just the fundamentally indeterminate bits like quantum mechanics, either. No, we can really make choices, choices constrained by physics, but not determined by them. But I digress. Here’s how Waters defines “free will”:
Roughly speaking, ‘free will’ denotes our capacity to think in ways that no other known creature can. We alone are capable of considering reasons (as you are doing right now) rather than merely reacting to the world via genetically fixed mechanisms. As philosopher J.T Ismael phrases it, we humans enjoy ‘metacognitive awareness’ and an ‘extended autobiographical self’. We are therefore able to consciously imagine future possibilities and play a role in causing which become our reality.
No, what he means is that humans are the only species that can say and articulate that they have reasons. In fact, our “reasons” are simply the weights that our neural computer programs give to various environmental and endogenous inputs before they spit out a decision. Animals do the same thing: they take in inputs, run them through the brains, and decide whether to flee, to pursue a prey, to mate with a member of the opposite sex, and so on. They have reasons, though they can’t articulate them. When a crow caching food sees other crows watching, and then digs up the food and reburies it elsewhere, does it not have a “reason”: other crows could steal their food. Does it realize that? Well, we don’t know, but it looks exactly like the reasons we humans adduce for our actions.
Or a mallard hen might take a male as a mate because he has particularly bright feathers. Is that not a “reason” she chose? Maybe she can’t ponder it, but so what? Our ponderings are merely post facto rationales for adaptive brain programs instilled in us by millions of years of natural selection. It’s the program that decides, and we can pretend that we decided independent of our determined outputs. No, “considering reasons” is, to me, a ludicrous definition of free will, and certainly not one necessarily limited to humans. (Do we really know what goes through the mind of an ape or a fox when it does something?)
In addition, just because we say we have reasons does not mean that those reasons are the real impetus behind what we do, or are reasons that could, at the time, be contradicted by different reasons. We can consider alternatives (or rather, our brains can “weigh” them by letting the dominant pathway “win”), but the one we wind up doing or thinking is not “free” in the sense that one could at the time use different reasons to arrive at a different output.
Enough. Waters then defines “critical rationalism” in a way that comports with his definition of free will, but also in a way that doesn’t at all distinguish it from the weights that an evolved and plastic system of neurons gives to different inputs before spitting out an output: a “decision”, a behavior, a thought, or a statement:
The core of critical rationalism is that all knowledge progresses via a process of ‘conjecture and refutation’. Thinking agents face problems, which are conflicts among their existing ideas, and seek to resolve these problems by detecting and eliminating cognitive errors. Overcoming these errors requires creatively generating new, better ideas.
As such, critical rationalism rejects ‘empiricism’, the notion that we derive our knowledge from sensory information. Empiricism depends on induction, the notion that learning about reality is akin to ‘curve fitting’ from given data points, which we can then extrapolate to predict the future or postdict the past. Popper rejected the principle of induction as logically invalid. We cannot assume the future will be like the past: instead we must conjecture testable explanatory theories about how reality works.
The second paragraph is arrant nonsense, because of course the brain takes in all kinds of sensory information before it executes its programs. When you see a lion coming, you run. When you see it’s raining, you put up an umbrella. Much of evolution, in fact, like bird migration, is based on the assumption that the future will be like the past. But lt us forget the nonsense about not getting information from the environment and concentrate on the first paragraph.
That, too, seems absolutely the same as “running a brain program evolved to increase your fitness” (brain programs can of course be fooled, as with optical illusions, plastic surgery, and so on). The “resolution” is not something that your “will” does independently of the laws of physics; it’s something that your brain does according to the laws of physics and the natural selection—also operating according to the laws of physics—that has molded our brain programs to buttress our survival and reproduction. While “creatively generating new, better ideas” sounds like we are free to generate those ideas, we’re not. It’s your brain working things out according to the laws of physics. So far I haven’t seen anything about Waters’s will that is free. What I see is a post facto description of brain programs treated as if they instantiated libertarian free will.
Waters then makes the common mistake of saying that the laws of physics can’t explain everything because it’s not the level of description we use when giving reasons. We say, “The U.S. and U.K. won World War II because they had bigger populations and better factories—and developed the atomic bomb.” And yes, that’s true, but those underlying reasons themselves are the result of the laws of physics, and must be compatible with the laws of physics. Only a moron would try to explain why we won the war on the basis of molecules. But that’s not the issue. The issue is whether it was inevitable that we won the war because the laws of physics interacted to make that result happen.
Here’s Waters’s example in which the “wrong level of explanation” is used to support libertarian free will and refute determinism:
Notice that this conception of explanation is ‘scale-invariant’ in that it doesn’t arbitrarily privilege low-level explanations over high-level ones, or concrete phenomena over abstractions. For instance, explaining Brexit via the movement of atoms according to the physical laws of motion is clearly a bad idea. This is because the best explanations for Brexit must invoke ‘emergent’ phenomena like ‘nationalism’ and ‘democracy’ , which are consistent with many different atomic arrangements.
One way to think about this is to ask whether Brexit would have occurred differently if God went back and messed with the atoms in Nigel Farage’s tea every morning. It turns out that the precise locations and momentums of these atoms didn’t matter at all in influencing the outcome. Indeed, you can say the same thing about the atoms in his brain. After all, our brains only work as they do because the chaotic motion of their constituent atoms are locked into groups of molecules, cells, and circuits. These processes allow for coherent thoughts about the future of Britain to persist long enough to communicate with other brains.
In short, micro-physical fluctuations didn’t cause Brexit. Ideas did. ‘Physical reductionists’ rule out such higher-level causes by fiat, and so must deny this reality, but critical rationalists need not. They can be perfectly comfortable with the notion that many of our actions are truly caused by our consciously held ideas, not by neuronal firings to which we’re completely oblivious.
But what are “ideas” except the output of neurons, which themselves are chemical and physical entities that emit electrical signals. You can say the “cause” is those signals, which gave rise to the ideas, or the “cause” is a misguided campaign by Brexiteers, but the latter comes down to the former. The last sentence about “critical rationalists” is just a flat assertion without evidence. Ideas are patterns of neuronal firings that come to consciousness, and any idea corresponds to one or more patterns of neuronal firings.
This is where Waters goes astray when asserting that determinism isn’t so great because there are many different underlying molecular events that could give rise to the same large-scale outcome—like Brexit. It may indeed be true that changing the molecules in Nigel Farage’s tea doesn’t affect his views on Brexit, but that’s because many different molecular configurations and physical events might map onto the same macro result. I may drive to the grocery store via Cottage Grove, or perhaps via 59th Street, but the groceries I buy will be the same.
Waters’s closing is completely confusing to me, for he seems to accept determinism and libertarian free will at the same time:
We need not think about the fundamental laws of physics as rails directing reality along a rigid trajectory. Rather, we can think of them as constraints on what kinds of physical transformations are possible and impossible. This richer notion of physical explanation is currently being developed by Deutsch and Chiara Marletto in the project of ‘Constructor Theory’.
Famous ‘free will sceptics’ like Jerry Coyne and Sam Harris are rightly worried about ditching the concept of physical determinism. In their view, the only alternative is a mysticism allowing for all kinds of silly miracles and supernatural beings. But such concerns are not warranted under the ‘constructor theoretic’ conception. According to this, we still live in a universe governed by timeless, fixed laws — it’s just that these laws do not dictate by themselves how exactly the future will unfold.
The physical laws that make it possible for us to be conscious and creative human beings, making real choices about what will happen next, are the very same laws that rule out Jesus spontaneously converting water into wine, or rising from the dead.
So if the laws of physics are merely constraints, and decisions can stray outside them, what makes those decisions jump the rails of physics? Waters gives us no clue, but it must be something mystical or non-physical, regardless of his claim that he doesn’t think that. If “the laws of physics do not dictate by themselves how exactly the future will unfold,” then what must we add to them to understand how the future will unfold? What is the sweating professor trying to say?
Waters doesn’t clarify. And I’m not sure if even he understands. All I know is that I don’t, and that’s not my fault.
h/t: Jiten
November 15, 2020 • 11:30 am
h/t: Paul
Sabine Hossenfelder says we don’t have free will, but its nonexistence shouldn’t bother us
October 11, 2020 • 1:00 pm
Here we have the German theoretical physicist, author, and science popularizer Sabine Hossenfelder giving an 11-minute talk called “You don’t have free will, but don’t worry”. (My own talk on the subject is the first five words she uses, and I think we should be concerned—though not in the sense she means.) The video and a written transcript are on her website Backreaction.
If you’ve read this site, you’ll know that my own views are pretty much the same as hers, at least about free will. We don’t have it, and the fundamental indeterminacy of quantum mechanics doesn’t give it to us either. Hossenfelder doesn’t pull any punches:
She adds this about quantum mechanics, which used to be a life preserver used to rescue the notion of “freedom”, but has largely been abandoned because with two seconds of thought you see that it doesn’t give us any freedom of the will:
Now note that she hasn’t actually defined free will so far, but later on she dismisses the concept that most people, including me, adhere to (my emphasis):
Now, some have tried to define free will by the “ability to have done otherwise”. But that’s just empty words. If you did one thing, there is no evidence you could have done something else because, well, you didn’t. Really there is always only your fantasy of having done otherwise.
I don’t agree here, for the “could have done otherwise” definition of free will is the one that most people adhere to, and the “otherwise” comes not from physical randomness but from will. In fact, Hossenfelder doesn’t even agree with herself, for shortly thereafter she implicitly defines free will this way—after having disposed of a few varieties of compatibilism (again, my emphasis):
I also find it unenlightening to have an argument about the use of words. If you want to define free will in such a way that it is still consistent with the laws of nature, that is fine by me, though I will continue to complain that’s just verbal acrobatics. In any case, regardless of how you want to define the word, we still cannot select among several possible futures. This idea makes absolutely no sense if you know anything about physics.
Here she implicitly defines free will as whatever facility enables us to “[select] among several possible futures,” and that’s the notion she refutes. I’m not sure why this idea is any more “empty words” than is “the ability to have done otherwise”.
At any rate, she goes on to conclude that the absence of free will doesn’t mean that our moral behavior will erode. I agree, of course. I think it means our “moral responsibility” disappears, for to me “moral responsibility” comes with the notion of “having an ability to make the ‘right’ choice”, an ability that doesn’t exist. I think we are responsible for our acts in the sense that it is our brains that have produced them, and thus for many reasons we should either be punished or rewarded. If you want to say “we are responsible because we have either transgressed or supported the acts society considers ‘moral'”, I’m not going to beef.
Hossenfelder concludes by reiterating that free will is “nonsense” and that “the idea deserves going into the rubbish bin.” True, that. But that doesn’t mean that we can’t be happy, for we have the illusion of free will, and we can use that as a crutch to go through life. She even suggest a psychological trick for being happy:
If it causes you cognitive dissonance to acknowledge you believe in something that doesn’t exist, I suggest that you think of your life as a story which has not yet been told. You are equipped with a thinking apparatus that you use to collect information and act on what you have learned from this. The result of that thinking is determined, but you still have to do the thinking. That’s your task. That’s why you are here. I am curious to see what will come out of your thinking, and you should be curious about it too.
Why am I telling you this? Because I think that people who do not understand that free will is an illusion underestimate how much their decisions are influenced by the information they are exposed to. After watching this video, I hope, some of you will realize that to make the best of your thinking apparatus, you need to understand how it works, and pay more attention to cognitive biases and logical fallacies.
I’m not sure how it helps to realize that “you have to still do the thinking”, when in reality the thinking is doing itself! Just because we don’t know what will happen—that our predictability is not so hot—doesn’t make us any less a bunch of meat robots who are slaves to the laws of physics. I know this, and yet I’m tolerably happy (for a lugubrious Jew). We know our “choices” are illusions, and my realization that these illusory choices come from a brain embedded in the skull of one Jerry A. Coyne does not give me the consolation Hossenfelder promises. But I still beat on, a boat against the current.
One more point: I’m not sure why compatibilists don’t just admit what Hossenfelder does instead of trying to find a definition of free will that people do have. The physicist Sean Carroll and philosopher Dan Dennett have taken that route, which I call the Definitional Escape rather than Hossenfelder’s There’s No Escape but Isn’t it Cool to Not Know what Comes Next.
The one thing I think Hossenfelder neglects comes from her last paragraph. If we do understand that free will in the Hossenfeldian sense is illusory, that has enormous consequences for the judicial system and for how we think about people who are either more or less fortunate than we are. I won’t dilate on this as I’ve discussed it to death. But yes, realizing that our brains are particles and obey the laws of physics should cause us worry—worry about how we treat prisoners and those who are mentally ill, and worry about how some people hold others responsible for making the “wrong choices.”
That aside, I applaud Dr. Hossenfelder for realizing the truth, which, as she says, is the ineluctable outcome of science, and for saying it so straightforwardly. I’m a big fan of hers. And I applaud myself for agreeing with her.
h/t: Andrew
Nobel Prize in Physics goes to three for showing that formation of black holes is predicted by relativity theory
October 6, 2020 • 6:15 am
This morning the Karolinska Institute awarded the 2020 Nobel Prize in Physics to two men and a woman—Roger Penrose, Reinhard Genzel, and Andrea Ghez—for work on black holes. As the press release notes:
Penrose got half the prize, with Genzel and Ghez sharing the other 50%.
My Nobel Prize Contest (see here and here) is already a big flop this year, with nobody guessing even one person from each of the two sets of winners so far. Reader ThyroidPlanet, though, did guess Penrose for physics.
The Chemistry prize will be announced tomorrow, and the Literature prize on Thursday.
Below is a video of this morning’s announcement featuring Professor Göran K. Hansson, Secretary General of the Royal Swedish Academy of Sciences. The action begins at 26:15, with the announcement in both Swedish and English. At 33:15, David Havilland, chair of the Nobel Committee for Physics, and Professor Ulf Danielsson explain the significance of the discovery.
George Ellis responds to my criticism of his argument for free will
June 15, 2020 • 9:30 am
Yesterday I posted a critique of an Aeon article by physicist George Ellis, arguing that science itself gives evidence for true libertarian free will. This rests on his claim that psychology exerts a “top-down” effect on molecules, and those top-down effects, because they stem from our thinking, our experiences, and our personalities (all subsumed under “our psychology”), constitute libertarian free will. (He didn’t say exactly how the top-down stuff gives a non-physical “agency” to people, but merely suggests that it’s a way to think about it.)
For once most readers agreed on my take, mainly because those readers who aren’t hard determinists like me still accept the laws of physics, while Ellis seems to argue that the “top down” influence on our molecules, and hence our behavior and “choices”, cannot be reduced to physics, and in fact is free of the laws of physics.
My response was brief. Your personality and character—the “top”—are formed by changes in your brain induced by your genes, your environment, and all the experiences you have. And those changes are ultimately molecular changes that affect neurons. And those changes obey the laws of physics. There is no “top” free from the laws of physics. (I add here that Ellis won the Templeton prize for harmonizing science and religion, which may go some way towards his promotion of a free will that seems quasi-religious, but certainly seems dualistic.)
Reader Steve commented on that post, saying “I asked Ellis to read your post and reply. Here’s what he said:″
I consider this an inadequate answer, but I won’t engage with Ellis’s ad hominem argument that “it’s a typical Jerry Coyne response.” I’ll address his comment on the “core issue.” And that is Ellis’s claim that physical things like electrons and “psychological” things like feelings, emotions, and behaviors, are two different things; in fact, they are two entirely different things, and you can’t understand “psychology” using molecules.
Perhaps we’re not at the stage where we can predict the effects that the environment or brain molecules have on behavior, but we’re not completely clueless, either. Should Ellis doubt this, ask him to imbibe a few stiff bourbons and see if there aren’t predictable results. Or give him a course of testosterone and see how it affects his behavior. As many people, including me, have indicated, there are plenty of experiments showing that one can affect one’s decisions, one’s beliefs,—and, indeed, one’s sense of agency—through physical manipulations of the brain, whether they be by experimenters or disease.
In contrast, as Sean Carroll has emphasized repeatedly (see the article and tweet below) not only that there’s no evidence for physics-free top-down causation, but, indeed, there’s evidence against it from the laws of physics. There is no way we know of for nonphysical thoughts to influence physical processes.
The solution, of course, is the parsimonious and evidenced idea that thoughts and feelings are the results of the laws of physics, combined, of course, with the evolution that helped program our brain to (usually) behave adaptively. When Ellis says “the true statement is that electrons interacting allow and enable the thoughts to take place at the psychological level,” he might as well say, “the electrons (and other particles and molecules) are what make the thoughts take place at the psychological level.” Then the influential “top” goes away.
Here are some writings by Sean Carroll on the intellectual vacuity of downward causation. Click on screenshot below.
Another paper claiming (but failing) to give evidence for libertarian free will
June 14, 2020 • 9:00 am
UPDATE: Reader Coel pointed out in the comments—this had escaped my notice—that the author of this piece, George Ellis, won the Templeton prize in 2004 for efforts in harmonizing science and religion. This may be relevant to the article below.
Part of his citation says this:
Beyond ethics, Ellis contends that there are many areas that cannot be accounted for by physics. “Even hard-headed physicists have to acknowledge a number of different kinds of existence” beyond the basics of atoms, molecules and chemicals, he said in his prepared remarks. Directly challenging the notion that the powers of science are limitless, Ellis noted the inability of even the most advanced physics to fully explain factors that shape the physical world, including human thoughts, emotions and social constructions such as the laws of chess.
A lot of people sent me this link to an article in Aeon by physicist George Ellis, with some of them telling me that his piece deals the death blow to determinism and pumps life into the idea of free will. It doesn’t—not by a long shot. And anyone who has such a take doesn’t understand either determinism or Ellis’s arguments. For once you understand them, you see that they’re invalidated by a big fallacy.
First, here’s Ellis’s bona fides from the site (his Wikipedia bio is here):
George Ellis is the Emeritus Distinguished Professor of Complex Systems in the Department of Mathematics and Applied Mathematics at the University of Cape Town in South Africa. He co-authored The Large Scale Structure of Space-Time (1973) with Stephen Hawking.
First, let us be clear that although Ellis doesn’t define what he means by “free will”, it’s clear that he’s talking about libertarian, contracausal, “you-could-have-chosen-otherwise” free will”, not the compatiblist free will that accepts physical determinism. No, Ellis thinks that determinism is simply wrong.
And by “determinism”, I don’t mean that “with perfect knowledge of the present, or of the moment after the Big Bang, we could predict what would happen with 100% accuracy”. I don’t believe that, as there are some fundamentally non-deterministic processes—to our best knowledge at present, quantum mechanics comprises some of these fundamental unpredictabilities—that could, at a given time, act to create different futures. (Evolution might be one of these if the fuel for the process—mutation—is affected by quantum unpredictability. In that case, “rerunning the tape of life” from a given point could yield different outcomes.)
By “determinism”, I mean that the future is determined only by the laws of physics, not by some nonphysical “will” or “agency” that we can exercise. (Throughout his piece, Ellis conflates physical determinism with predictability, an odd stance for a physicist.)
The article, at six printed pages in Word in 9-point type, is very long, and larded with descriptions of the Schrödinger equation, ion channels, gene regulation, and biochemistry, all fancy science that could bamboozle the reader into thinking that Ellis’s view is backed by science. But it isn’t. In fact, his argument can be stated very simply, and I’ll try to paraphrase it:
Free will acts by human psychology changing the constraints that act on our brain molecules. Although physical processes are constrained by physical laws, psychology can override and change these constraints. Therefore, our minds, or our psychology, can somehow “reach down” to affect the molecular processes occurring in our brains and bodies. And these psychological processes, which are apparently themselves physically unconstrained, constitute free will.
Now that is my paraphrase, but I’ll support it with quotes from Ellis (any bolding is mine):
In the case of the biomolecules that underlie the existence of life, it’s the shape of the molecule that acts as a constraint on what happens. These molecules are quite flexible, bending around joints rather like hinges. The distances between the atomic nuclei in the molecules determine what bending is possible. Any particular such molecular ‘conformation’ (a specific state of folding) constrains the motions of ions and electrons at the underlying physical level. This can happen in a time-dependent fashion, according to biological needs. In this way, biology can reach down to shape physical outcomes. It changes constraints in the applicable Schrödinger equation.
. . . So what determines which messages are conveyed to your synapses by signalling molecules? They are signals determined by thinking processes that can’t be described at any lower level because they involve concepts, cognition and emotions in an essential way. Psychological experiences drive what happens. Your thoughts and feelings reach ‘down’ to shape lower-level processes in the brain by altering the constraints on ion and electron flows in a way that changes with time.
The phrase that our thinking “reaches down” to affect our molecules recurs often in this essay, implying a psychological “invisible hand”—”thinking processes”—that is the crux of Ellis’s argument. But wait! There’s more!
How does any of this happen? As the Austrian-American doctor Eric Kandel explained in his Nobel Prize Lecture from 2000, the process of learning at the mental level leads to changed patterns of gene expression, and so specific proteins being produced, which alter the strengths of neural connections at synapses. This changes the strength of connections between neurons, thereby storing memories.
Such learning is a psychological happening. You might remember your pleasure on eating a delicious meal, the details of a Yo-Yo Ma rendition of a Bach sonata, or the painful memory of the car crash. Once again, these are irreducible psychological events: they can’t be described at any lower level. They reach down to alter neuronal connections over time. These changes can’t be predicted on the basis of the initial state of the neural connections (your neurons did not know that the car crash was about to happen) – but, afterwards, they constrain electron flows differently, because connections have changed. Learning changes structure at the macro scale (we have a ‘plastic brain’), which reaches down to alter micro connections and the details of electron flows at the bottom.
(Note the allusion to classical music; people love to show off in this way. They never talk about the moving saxophone solo of Lester Young or the poignancy of a Frank Sinatra song; it’s always Bach or Beethoven, isn’t it?)
If it’s not too early in the morning, you’ve already spotted the flaw in Ellis’s argument. First, he’s conflating predictability with physical determinism, and he’s also arguing, falsely, that a “change in constraint” means “libertarian free will”. Most important, he’s pretending that psychological phenomena are not physical.
I’ll cite just two more bits:
What these instances show is that psychological understandings reach down to shape the motions of ions and electrons by altering constraints at the physics-level over time. That is, mental states change the shape of proteins because the brain has real logical powers. This downward causation trumps the power of initial conditions. Logical implications determine the outcomes at the macro level in our thoughts, and at the micro level in terms of flows of electrons and ions.
. . . Genuine mental functioning and the ability to make decisions in a rational way is a far more persuasive explanation of how books get written. That this is possible is due to the extraordinary hierarchical structure of our brain and its functioning. And that functioning is enabled by downward causation from the psychological to the physical levels, with outcomes at the physics level determined by constraints that change over time. No violation of physical laws need occur.
That should be enough to show that Ellis’s argument is bogus. Why? Because it argues that the “psychological level” (“mental thoughts”) is somehow different from the “physical level”: that our experiences, our cogitation, and our interactions with others and with external events, are different from physical processes that shape our actions. After all, they are “top down” phenomena.
But they’re not. We can be influenced by internal and external events to so that our behaviors and actions would differ from how they’d be if those events were different. The example I often use is kicking a dog, though I would never do that. If a dog is friendly, but you repeatedly kick it when it approaches you, it’s not going to be nearly as friendly as it would have been had you petted it instead. Your actions have rewired the dog’s brain in such a way that it regards you as an object of fear rather than affection. It’s as simple as that. And in a similar way, our experiences reprogram our brains—our onboard meat computers—in a physical way that affects our behavior, but that we don’t yet understand. There is no psychology independent of physics that can “reach down” to affect our molecules, because, in the end, our psychology is based on molecules, even if we can’t yet (or ever) predict our future behavior with a deep knowledge of our brains.
Ellis’s Big Flaw: to claim that there is an Invisible NonPhysical Psychological Hand that “reaches down” and “changes our constraints”. Rather, there are external stimuli and inputs into our brains that alter their workings. We don’t need the palaver about “constraints” and “reaching down”, which is simply literary prestidigitation that is obscurantist.
Remember, Ellisis not talking about compatibilist free will here. He’s talking about contracausal free will. Ellis rests his case on a human psychology that is independent of physics. And that is pure dualism.
In the end, perhaps his mask slips a little, for in his ending he appears to find determinism distasteful because it absolves us of the ability to make “genuine choices” or to be “accountable for our behavior”. (I’ve argued that we are still responsible for our behavior in the sense that we are the entities who behave, but that we are not “morally responsible” because we could not have acted otherwise. But this is irrelevant to his argument.)
Ellis’s ending:
If you seriously believe that fundamental forces leave no space for free will, then it’s impossible for us to genuinely make choices as moral beings. We wouldn’t be accountable in any meaningful way for our reactions to global climate change, child trafficking or viral pandemics. The underlying physics would in reality be governing our behaviour, and responsibility wouldn’t enter into the picture.
Well, he’s wrong about “responsibility” if you conceive of it as I do—in a way that still allows for and, indeed, asks for approbation and disapprobation, punishment and reward. After all, those are external stimuli that can alter our brains.
But one gets the sense here that Ellis’s misguided screed in favor of free will is motivated in part by his distaste for determinism and his need for all of us to be “responsible”. But, as a scientist, Ellis should realize that wanting something to be true has no effect on whether it is true.
Sean Carroll on the shows “Westworld” and “Devs”: free will, simulations, and multiverses
May 10, 2020 • 9:45 am
In the May 4 New York Times, culture reporter Reggie Ugwu interviewed Sean Carroll about the recent television series “Westworld” (HBO) and “Devs” (FX on Hulu). Sean watched both shows and gives his reactions, then discusses the premises of the shows. Since I’ve seen only two episodes of one show (“Westworld”), and none of the other, I’ll let you read Sean’s take. Instead I’ll concentrate on one of the big topics of the interview as well as a pet interest of mine: free will.
Sean is a “compatibilist”: someone who, while admitting that our behaviors are determined in the sense that that laws of physics “fix the facts”, as Alex Rosenberg claims, including the facts of our behaviors, still avers that we can sensibly speak of “making a choice”. That is, while we could not have “chosen” other than what we did, we can still talk about “making a choice” and even pretend to ourselves that we really did make a “libertarian” choice where, at a given point, we could have made several alternative decisions.
I have no objection to saying that we have “free will” in the sense that we behave as if we did, though what rankles me are two things. First, philosophers dealing with the issue tend to concentrate on the “we really have a free will” part and downplay the determinism part, which to my mind is the part that has real ramifications for human behavior. Second, I think they do this (Dan Dennett has said so explicitly) because they think that if people realize that they don’t have a “free” choice and could not have made other choices, society will fall apart, with all of us, feeling like automatons or puppets, becoming nihilists unable to rise from our beds. That, of course, is false. I’m a “hard determinist” and get out of bed every day, and I realize that my “agency” is illusory even though I feel that it’s real. And if you’re a philosopher who argues for compatibilism in this way, it is condescending, for it tries to buttress most peoples’ feelings that they have libertarian free will. It’s exactly like those cynical theologians who don’t really believe in God, but think that it’s good for people to do so, as it keeps them on the straight and narrow. It’s odd that Dan Dennett, who’s demolished the theological argument as “belief in belief”, does nearly the same thing with free will.
There are a few more issues that compatibilists like to bring up.
Nobody really accepts libertarian I-could-have-chosen-otherwise free will. That may be true among non-theological philosophers, but not among the public, with 60-85% of people surveyed in four countries saying that we live in a universe that has libertarian free will. And ask a religionist, one who thinks that one has a free choice to accept Jesus, God, or Allah, if they are really determinists at bottom. With the exceptions of Calvinists and a few other sects, they’ll say “Hell, no!” (Well, they’d probably leave out the “hell.”)
Even if our behaviors are determined, it wouldn’t make a difference to society if we espouse some form of free will. This is palpably untrue. Although some philosophers as well as some readers here say, “there are no consequences of determinism”, I think that’s cant and, indeed, somewhat disingenuous. Our whole legal system has a retributivist bent that comes from punishing people because we think they could have made a better choice than they did. Likewise, poor people are often held responsible for their own circumstances (viz. Reagan’s “welfare queens”), so that people ultimately get what they deserve. This is called the “Just World” view of life.
And if you don’t believe that, look at the Sarkissian et al. study of those four countries: 60-75% of people surveyed thought that if they lived in a deterministic universe and could not have chosen other than they did, then people would not be morally responsible for their actions. This, I believe, is the reason why some philosophers like Dan Dennett, though avowing otherwise (but contradicting himself in other places), espouse compatibilism: if you tell people they have free will, they consider themselves morally responsible for their actions. And, said Dennett, if they don’t, then society will fall apart.
My response to that is that we can have more justifiable and more ethically based systems of reward and punishment if we don’t accept that people could have done other than what they did. They can be held responsible for their actions, and rewarded or punished (the latter on the basis of deterring them, sequestering them from the public, or reforming them), but not morally responsible for their actionsFor “moral responsibility”, as with the people surveyed above, implies libertarian free will and can justify retributive punishment. (That said, I don’t object to the use of “moral” to characterize” what comports with human ethics”. But I dislike the term “morally responsible”, which smacks of free choice.
Compatibilists have not settled on a definition of “free will”. To one compatibilist, free will means freedom from obvious coercion. To another, it’s that we have complex processing in our brain that spits out a decision that’s gone through an involved (and evolved) program. To a third, it’s somebody who’s sane enough to understand the consequences of their actions. There are as many definitions of “free will” as there are compatibilist philosophers. So when you say “we have free will”, you better be damn sure that you add exactly what you mean, explain why your compatibilistic “free will” is not only different from others, but is better than others.
Enough. In his interview, Sean not only admits that he’s a determinist (and a compatibilist, which he lays out in his book The Big Picture), but comes surprisingly close to saying that determinism should affect our view of human behavior. I’ll quote a few of his answers (indented) and make a few comments:
First, Sean’s definition of determinism:
A common thread between the two shows is the conflict between free will vs. determinism. Can you explain what determinism is?
Determinism is basically the idea that if you knew everything that was happening in the universe at one moment, then you would know, in principle, everything that was going to happen in the future, and everything that did happen in the past, with perfect accuracy. Pierre-Simone Laplace pointed this out in the 1800s using a thought experiment called Laplace’s Demon.
Well, that’s his view of determinism, but I would rather use the world “naturalism”. For, if quantum mechanics be true, there are things that we couldn’t predict even if we had perfect knowledge of the Universe—like when a given radioactive atom will decay. And this means that even if we knew everything happening in the universe at one time, predictions might be inaccurate. I myself have argued that perhaps even evolution is unpredictable with perfect knowledge if mutation involves quantum processes. If that’s the case—and we don’t know—then the fuel for evolutionary change is unpredictable, and hence so is evolution.
Below is Sean’s compatibilism. Most readers here probably agree with it, and I don’t disagree unless one emphasizes the free will part and not the determinism part. All the following emphases in bold are mine, in which Sean makes it clear that we could not have done other than what we did.
Let’s talk about free will. Do we have it?
It’s complicated, and I apologize for that, but it’s worth getting right. The very first question we have to ask is: Are we human beings 100 percent governed by the laws of physics? Or do we, as conscious creatures, have some wiggle room that allows us to act in ways that are outside of the laws of physics? Almost all scientists will tell you that of course it’s the former. If you jump out of a window, the laws of physics say that you are going to hit the ground. You can use all of the free will you want, but it’s not going to stop you from hitting the ground. So why would you think that it works any differently when you go to decide what shirt you’re going to wear in the morning? It’s the same laws of physics. It’s just that one case is a more crude prediction and the other case is a more detailed prediction.
Good, Dr. Carroll! We obey the laws of physics when we “choose” a shirt to wear.
Did I make the choice to pick up the phone and call you?
Short answer: yes. Long answer: It depends on what you mean by “you” and “make the choice.” At one level, you’re a collection of atoms obeying the laws of physics. No choices are involved there. But at another level, you are a person who pretty obviously makes choices. The two levels are compatible, but speak very different languages. This is the “compatibilist” stance toward free will, which is held by a healthy majority of professional philosophers.
I think this is a bit confusing given that most people think that the words “you are a person who pretty obviously makes choices” means “FREE” choices. Now Sean takes care to make the distinction between determinism and the illusion of free choice, but I couch the distinction in a way very different from Sean, emphasizing the determinism. The rest is semantics, often (not with Sean!) constructed to fool people into behaving morally.
Here Ugwu asks a good question, and Sean answers with the traditional form of compatibilism.
But isn’t that a rhetorical sleight of hand? If our choices are fully predetermined by physical processes outside of our control and beneath our consciousness, are they really choices? Or is that just a story we’re telling ourselves?
I think it’s the same as the chair you’re sitting on. Is it an illusion because it’s really just a bunch of atoms? Or is it really a chair? It’s both. You can talk about it as a set of atoms, but there’s nothing wrong with talking about it as a chair. In fact, you would be dopey to not talk about it as a chair, to insist that the only way to talk about it was as a set of atoms. That’s how nature is. It can be described using multiple different vocabularies at multiple different levels of precision.
At the level of precision where we’re talking about human beings and tables and chairs, you just can’t talk that way without talking about people making choices. There’s just no way to do it. You can hypothesize, “What if I had infinite powers and I knew where all the atoms were and I knew all the laws of physics.” Fine. But that’s not reality. If you’re reality based, then you have to talk about choices.
In his response below, Sean comes about as close as he ever has to saying that there are tangible social effects of accepting determinism (again, the emphasis in the third paragraph is mine).
On both shows, the laws of physics are used to reframe the idea of morality. On “Devs,” Forest makes the argument that determinism is “absolution.” And there’s an idea in “Westworld” that humans are just “passengers”; forces beyond our control are behind the wheel. When you see people on the news, or even when you think about the people in your own life, does your belief in determinism affect the way you judge their behavior?
Not really, no. As long as you’re talking about a human-scale world. This idea that we are just puppets is clearly a mistake. It’s mixing up two different ways of talking about the world. There’s a way of talking about human beings going through their lives and making choices. There’s another way of talking about the laws of physics being deterministic and so forth. Those are two different ways — pick one.
[Carroll] Now, there are situations where we might learn that the choices that we thought people had are more circumscribed than we knew, either because of their biology or because of mental health issues, or what have you. By all means, take that into consideration. But that’s very human-scale stuff. If a person could not have acted otherwise, then you don’t hold them responsible in the same way. It’s not a matter of cutting edge science, it’s ancient law.
Well, ancient law isn’t that clear cut! For law, ancient and modern, is largely based on the premise that in some cases people could have acted otherwise. But they couldn’t—not ever! So we shouldn’t hold people responsible as if they could have acted otherwise. And that has enormous ramifications for the legal system. We already have “not guilty by reason of insanity”, but we should have “guilty of doing an act, but punished in light of the knowledge that they couldn’t have acted other than what they did.” I’ve always thought that the court should determine responsibility, but another agency should determine “punishment”, and in light of determinism.
I wish that Sean would discuss the ramification of that “human-scale stuff”, because that’s what’s important to society. Sean clearly implies that the law under determinism would be different from the law under libertarianism (or perhaps even compatibilism). And in that he does differ from people like Dennett and many of the readers here. I’d love to have a discussion about all this with Sean some day.
A good critique of panpsychism but a lousy alternative
January 14, 2020 • 11:00 am
The article at hand was published by the Institute of Art and Ideas, a British organization that I hadn’t heard of but is described by Wikipedia thusly:
The Institute of Art and Ideas is an arts organisation founded in 2008 in London. Its programming includes the world’s largest philosophy and music festival, HowTheLightGetsIn and the online channel IAI TV, where talks, debates and articles by leading thinkers can be accessed for free, under the slogan “Changing How The World Thinks.”
I then remembered that they invited me to that festival a few years back, but expected me to pay my own way, which I won’t do just to help them fund their endeavors. But I will point to this article on their website by Bernardo Kastrup, identified as a “Dutch computer scientist and philosopher who has published fundamental theoretical reflections on the mind matter problem.” I have to say that if you go to his site, which is the link at his name, you will find a considerable amount of hubris! But amuse yourself later.
Kastrup is quite critical of panpsychism, and for good reasons. But then, near the end of his piece, the whole argument goes south. For Kastrup, while saying that panpsychism can’t help us understand the “hard problem of consciousness”, also claims that materialism can’t solve it either, and we need to posit that the entire universe (or, as Sean Carroll would say, the Big Wave Function) is conscious. And, like panpsychism, that’s crazy and untestable. It’s weird that a philosopher can so deftly dispose of a crazy theory but then fall under the spell of a different crazy theory. But read by clicking on the screenshot:
Kastrup’s beef is with the “combination problem” that I’ve highlighted before: how does the semi-consciousness of elementary particles (people like Philip Goff posit that the spin, charge, mass, and other properties of these particle are aspects of their “consciousness”—a semantic trick) combine to provide the “high level” consciousness of animals such as ourself? So far I haven’t seen a solution to this problem from panpsychists, only a bunch of handwaving.
Kastrup highlights the combination problem in a more physical way, involving the recognition that particles are not discrete, but aspects of the Big Wave function:
The only way around this issue, says Kastrup, is to posit that what is really conscious is the field that creates the particle itself. He explains why the panpsychist can’t coherently argue why experiential states belong to particles themselves, but then his argument begins to fall apart. Why? Because Kastrup says that even if the quantitative aspects of particles could combine to produce consciousness, they would produce consciousness only as a quantitative property, but consciousness is a qualitative property—the problem of quality:
Dr. Kastrup doesn’t seem to realize that some day I think we’ll be able to stimulate blind people’s brains in the right way and then they will see red! We can already give them a very rudimentary experience of vision. Why is he so sure that the qualia of “red” is beyond scientific understanding?
As I wrote five days ago, on similar bases Patricia Churchland has pretty much knocked down the idea that we can’t understand the origin and mechanism of subjective sensations through a materialist paradigm. I refer you again to her excellent 2005 paper in Progress in Brain Research, “A neurophilosophical slant on consciousness research”, available free at the link. Churchland thinks, and makes a persuasive case, that just because “qualia” (sensations) are “subjective”, that doesn’t put them beyond the reach of materialist explanation. The whole “consciousness is subjective and thus can’t be understood by a materialistic approach” argument is, it seems, a red herring.
Kastrup deep-sixes the panpsychism explanation, at least in terms of the constituents of the brain having some form of consciousness, but comes a cropper (I love that phrase!) when he tries to replace panpsychism with his own theory. For that theory is simply this: the entire universe—the “quantum field”—is conscious. This, he thinks, avoids the “combination problem.” He doesn’t seem to realize that it raises another problem: testability. Also, he looks a bit foolish when he criticizes materialism, which is the only way we have ever been able to understand the universe. Here’s what he says (I’ve put his definition of a “reduction base” at the bottom):
So now we don’t have the combination problem nor the untenable idea that each particle has some unique consciousness or apprehension of the universe. All we need posit is that the entire universe is conscious. But that nagging little problem remains: “In what sense is it conscious? Oh, and there’s another issue: “How do we test your theory, Dr. Kastrup?” For a theory that can be neither tested nor falsified is a theory that can be ignored, for it’s not a scientific or empirical explanation.
Now in the passage above Kastrup links to a big book he wrote, and I’m sure he’d point me to that to show why the Universe’s wave function is conscious. But I’m not reading it—not yet. For all I anticipate there is just another species of gobbledygook, or, as Churchland calls it, “hornswoggling.” If you want to read it, by all means do so and report back here.
I wonder why so many people these days are dissatisfied with materialism and science and are drawn to metaphysics, e.g., Tom Nagel, Tom Wolfe, Philip Goff and now Kastrup. You tell me! One thing I know: Kastrup is in good company. These are from his website:
How Kastrup defines the “reduction base” of theory (I’d call it the “turtle at the bottom”):
h/t: Paul |
6c9d36fd950066bd | Time in physics
Foucault's pendulum in the Panthéon of Paris can measure time as well as demonstrate the rotation of Earth.
Time in physics is defined by its measurement: time is what a clock reads.[1] In classical, non-relativistic physics, it is a scalar quantity (often denoted by the symbol [2]) and, like length, mass, and charge, is usually described as a fundamental quantity. Time can be combined mathematically with other physical quantities to derive other concepts such as motion, kinetic energy and time-dependent fields. Timekeeping is a complex of technological and scientific issues, and part of the foundation of recordkeeping.
Markers of time
Before there were clocks, time was measured by those physical processes[3] which were understandable to each epoch of civilization:[4]
Eventually,[10][11] it became possible to characterize the passage of time with instrumentation, using operational definitions. Simultaneously, our conception of time has evolved, as shown below.[12]
The unit of measurement of time: the second
In the International System of Units (SI), the unit of time is the second (symbol: ). It is a SI base unit, and has been defined since 1967 as "the duration of 9,192,631,770 [cycles] of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom".[13] This definition is based on the operation of a caesium atomic clock.These clocks became practical for use as primary reference standards after about 1955, and have been in use ever since.
The state of the art in timekeeping
The UTC timestamp in use worldwide is an atomic time standard. The relative accuracy of such a time standard is currently on the order of 10−15[14] (corresponding to 1 second in approximately 30 million years). The smallest time step considered theoretically observable is called the Planck time, which is approximately 5.391×10−44 seconds - many orders of magnitude below the resolution of current time standards.
The caesium atomic clock became practical after 1950, when advances in electronics enabled reliable measurement of the microwave frequencies it generates. As further advances occurred, atomic clock research has progressed to ever-higher frequencies, which can provide higher accuracy and higher precision. Clocks based on these techniques have been developed, but are not yet in use as primary reference standards.
Conceptions of time
Andromeda galaxy (M31) is two million light-years away. Thus we are viewing M31's light from two million years ago,[15] a time before humans existed on Earth.
Galileo, Newton, and most people up until the 20th century thought that time was the same for everyone everywhere. This is the basis for timelines, where time is a parameter. The modern understanding of time is based on Einstein's theory of relativity, in which rates of time run differently depending on relative motion, and space and time are merged into spacetime, where we live on a world line rather than a timeline. In this view time is a coordinate. According to the prevailing cosmological model of the Big Bang theory, time itself began as part of the entire Universe about 13.8 billion years ago.
Regularities in nature
In order to measure time, one can record the number of occurrences (events) of some periodic phenomenon. The regular recurrences of the seasons, the motions of the sun, moon and stars were noted and tabulated for millennia, before the laws of physics were formulated. The sun was the arbiter of the flow of time, but time was known only to the hour for millennia, hence, the use of the gnomon was known across most of the world, especially Eurasia, and at least as far southward as the jungles of Southeast Asia.[16]
At first, timekeeping was done by hand by priests, and then for commerce, with watchmen to note time as part of their duties. The tabulation of the equinoxes, the sandglass, and the water clock became more and more accurate, and finally reliable. For ships at sea, boys were used to turn the sandglasses and to call the hours.
Mechanical clocks
By the time of Richard of Wallingford, the use of ratchets and gears allowed the towns of Europe to create mechanisms to display the time on their respective town clocks; by the time of the scientific revolution, the clocks became miniaturized enough for families to share a personal clock, or perhaps a pocket watch. At first, only kings could afford them. Pendulum clocks were widely used in the 18th and 19th century. They have largely been replaced in general use by quartz and digital clocks. Atomic clocks can theoretically keep accurate time for millions of years. They are appropriate for standards and scientific use.
Galileo: the flow of time
Galileo's experimental setup to measure the literal flow of time, in order to describe the motion of a ball, preceded Isaac Newton's statement in his Principia:
I do not define time, space, place and motion, as being well known to all.[21]
The Galilean transformations assume that time is the same for all reference frames.
Newton's physics: linear time
In or around 1665, when Isaac Newton (1643–1727) derived the motion of objects falling under gravity, the first clear formulation for mathematical physics of a treatment of time began: linear time, conceived as a universal clock.
The water clock mechanism described by Galileo was engineered to provide laminar flow of the water during the experiments, thus providing a constant flow of water for the durations of the experiments, and embodying what Newton called duration.
In this section, the relationships listed below treat time as a parameter which serves as an index to the behavior of the physical system under consideration. Because Newton's fluents treat a linear flow of time (what he called mathematical time), time could be considered to be a linearly varying parameter, an abstraction of the march of the hours on the face of a clock. Calendars and ship's logs could then be mapped to the march of the hours, days, months, years and centuries.
Thermodynamics and the paradox of irreversibility
In 1824 Sadi Carnot (1796–1832) scientifically analyzed the steam engine with his Carnot cycle, an abstract engine. Rudolf Clausius (1822–1888) noted a measure of disorder, or entropy, which affects the continually decreasing amount of free energy which is available to a Carnot engine in the:
Thus the continual march of a thermodynamic system, from lesser to greater entropy, at any given temperature, defines an arrow of time. In particular, Stephen Hawking identifies three arrows of time:[23]
• Psychological arrow of time - our perception of an inexorable flow.
• Thermodynamic arrow of time - distinguished by the growth of entropy.
• Cosmological arrow of time - distinguished by the expansion of the universe.
Entropy is maximum in an isolated thermodynamic system, and increases. In contrast, Erwin Schrödinger (1887–1961) pointed out that life depends on a "negative entropy flow".[24] Ilya Prigogine (1917–2003) stated that other thermodynamic systems which, like life, are also far from equilibrium, can also exhibit stable spatio-temporal structures. Soon afterward, the Belousov–Zhabotinsky reactions[25] were reported, which demonstrate oscillating colors in a chemical solution.[26] These nonequilibrium thermodynamic branches reach a bifurcation point, which is unstable, and another thermodynamic branch becomes stable in its stead.[27]
Electromagnetism and the speed of light
In 1864, James Clerk Maxwell (1831–1879) presented a combined theory of electricity and magnetism. He combined all the laws then known relating to those two phenomenon into four equations. These vector calculus equations which use the del operator () are known as Maxwell's equations for electromagnetism.
ε0 and μ0 are the electric permittivity and the magnetic permeability of free space;
c = is the speed of light in free space, 299 792 458 m/s;
E is the electric field;
B is the magnetic field.
These equations allow for solutions in the form of electromagnetic waves. The wave is formed by an electric field and a magnetic field oscillating together, perpendicular to each other and to the direction of propagation. These waves always propagate at the speed of light c, regardless of the velocity of the electric charge that generated them.
The fact that light is predicted to always travel at speed c would be incompatible with Galilean relativity if Maxwell's equations were assumed to hold in any inertial frame (reference frame with constant velocity), because the Galilean transformations predict the speed to decrease (or increase) in the reference frame of an observer traveling parallel (or antiparallel) to the light.
The Michelson–Morley experiment failed to detect any difference in the relative speed of light due to the motion of the Earth relative to the luminiferous aether, suggesting that Maxwell's equations did, in fact, hold in all frames. In 1875, Hendrik Lorentz (1853–1928) discovered Lorentz transformations, which left Maxwell's equations unchanged, allowing Michelson and Morley's negative result to be explained. Henri Poincaré (1854–1912) noted the importance of Lorentz's transformation and popularized it. In particular, the railroad car description can be found in Science and Hypothesis,[29] which was published before Einstein's articles of 1905.
The Lorentz transformation predicted space contraction and time dilation; until 1905, the former was interpreted as a physical contraction of objects moving with respect to the aether, due to the modification of the intermolecular forces (of electric nature), while the latter was thought to be just a mathematical stipulation.[citation needed]
Einstein's physics: spacetime
But it is not possible without further assumption to compare, in respect of time, an event at A with an event at B. We have so far defined only an "A time" and a "B time."
We have not defined a common "time" for A and B, for the latter cannot be defined at all unless we establish by definition that the "time" required by light to travel from A to B equals the "time" it requires to travel from B to A. Let a ray of light start at the "A time" tA from A towards B, let it at the "B time" tB be reflected at B in the direction of A, and arrive again at A at the “A time” tA.
In accordance with definition the two clocks synchronize if
— Albert Einstein, "On the Electrodynamics of Moving Bodies"[30]
Einstein showed that if the speed of light is not changing between reference frames, space and time must be so that the moving observer will measure the same speed of light as the stationary one because velocity is defined by space and time:
where r is position and t is time.
Indeed, the Lorentz transformation (for two reference frames in relative motion, whose x axis is directed in the direction of the relative velocity)
can be said to "mix" space and time in a way similar to the way a Euclidean rotation around the z axis mixes x and y coordinates. Consequences of this include relativity of simultaneity.
More specifically, the Lorentz transformation is a hyperbolic rotation which is a change of coordinates in the four-dimensional Minkowski space, a dimension of which is ct. (In Euclidean space an ordinary rotation is the corresponding change of coordinates.) The speed of light c can be seen as just a conversion factor needed because we measure the dimensions of spacetime in different units; since the metre is currently defined in terms of the second, it has the exact value of 299 792 458 m/s. We would need a similar factor in Euclidean space if, for example, we measured width in nautical miles and depth in feet. In physics, sometimes units of measurement in which c = 1 are used to simplify equations.
Time in a "moving" reference frame is shown to run more slowly than in a "stationary" one by the following relation (which can be derived by the Lorentz transformation by putting ∆x′ = 0, ∆τ = ∆t′):
• τ is the time between two events as measured in the moving reference frame in which they occur at the same place (e.g. two ticks on a moving clock); it is called the proper time between the two events;
• t is the time between these same two events, but as measured in the stationary reference frame;
• v is the speed of the moving reference frame relative to the stationary one;
• c is the speed of light.
Moving objects therefore are said to show a slower passage of time. This is known as time dilation.
These transformations are only valid for two frames at constant relative velocity. Naively applying them to other situations gives rise to such paradoxes as the twin paradox.
That paradox can be resolved using for instance Einstein's General theory of relativity, which uses Riemannian geometry, geometry in accelerated, noninertial reference frames. Employing the metric tensor which describes Minkowski space:
Einstein developed a geometric solution to Lorentz's transformation that preserves Maxwell's equations. His field equations give an exact relationship between the measurements of space and time in a given region of spacetime and the energy density of that region.
is the gravitational time dilation of an object at a distance of .
is the change in coordinate time, or the interval of coordinate time.
is the gravitational constant
is the mass generating the field
is the change in proper time , or the interval of proper time.
Or one could use the following simpler approximation:
That is, the stronger the gravitational field (and, thus, the larger the acceleration), the more slowly time runs. The predictions of time dilation are confirmed by particle acceleration experiments and cosmic ray evidence, where moving particles decay more slowly than their less energetic counterparts. Gravitational time dilation gives rise to the phenomenon of gravitational redshift and Shapiro signal travel time delays near massive objects such as the sun. The Global Positioning System must also adjust signals to account for this effect.
According to Einstein's general theory of relativity, a freely moving particle traces a history in spacetime that maximises its proper time. This phenomenon is also referred to as the principle of maximal aging, and was described by Taylor and Wheeler as:[31]
"Principle of Extremal Aging: The path a free object takes between two events in spacetime is the path for which the time lapse between these events, recorded on the object's wristwatch, is an extremum."
Einstein's theory was motivated by the assumption that every point in the universe can be treated as a 'center', and that correspondingly, physics must act the same in all reference frames. His simple and elegant theory shows that time is relative to an inertial frame. In an inertial frame, Newton's first law holds; it has its own local geometry, and therefore its own measurements of space and time; there is no 'universal clock'. An act of synchronization must be performed between two systems, at the least.
Time in quantum mechanics
There is a time parameter in the equations of quantum mechanics. The Schrödinger equation[32] is
One solution can be
where is called the time evolution operator, and H is the Hamiltonian.
But the Schrödinger picture shown above is equivalent to the Heisenberg picture, which enjoys a similarity to the Poisson brackets of classical mechanics. The Poisson brackets are superseded by a nonzero commutator, say [H,A] for observable A, and Hamiltonian H:
This equation denotes an uncertainty relation in quantum physics. For example, with time (the observable A), the energy E (from the Hamiltonian H) gives:
is the uncertainty in energy
is the uncertainty in time
is Planck's constant
The more precisely one measures the duration of a sequence of events, the less precisely one can measure the energy associated with that sequence, and vice versa. This equation is different from the standard uncertainty principle, because time is not an operator in quantum mechanics.
Quantum mechanics explains the properties of the periodic table of the elements. Starting with Otto Stern's and Walter Gerlach's experiment with molecular beams in a magnetic field, Isidor Rabi (1898–1988), was able to modulate the magnetic resonance of the beam. In 1945 Rabi then suggested that this technique be the basis of a clock[33] using the resonant frequency of an atomic beam.
Dynamical systems
See dynamical systems and chaos theory, dissipative structures
One could say that time is a parameterization of a dynamical system that allows the geometry of the system to be manifested and operated on. It has been asserted that time is an implicit consequence of chaos (i.e. nonlinearity/irreversibility): the characteristic time, or rate of information entropy production, of a system. Mandelbrot introduces intrinsic time in his book Multifractals and 1/f noise.
Signalling is one application of the electromagnetic waves described above. In general, a signal is part of communication between parties and places. One example might be a yellow ribbon tied to a tree, or the ringing of a church bell. A signal can be part of a conversation, which involves a protocol. Another signal might be the position of the hour hand on a town clock or a railway station. An interested party might wish to view that clock, to learn the time. See: Time ball, an early form of Time signal.
Evolution of a world line of an accelerated massive particle. This world line is restricted to the timelike top and bottom sections of this spacetime figure; this world line cannot cross the top (future) or the bottom (past) light cone. The left and right sections (which are outside the light cones) are spacelike.
Along with the formulation of the equations for the electromagnetic wave, the field of telecommunication could be founded. In 19th century telegraphy, electrical circuits, some spanning continents and oceans, could transmit codes - simple dots, dashes and spaces. From this, a series of technical issues have emerged; see Category:Synchronization. But it is safe to say that our signalling systems can be only approximately synchronized, a plesiochronous condition, from which jitter need be eliminated.
That said, systems can be synchronized (at an engineering approximation), using technologies like GPS. The GPS satellites must account for the effects of gravitation and other relativistic factors in their circuitry. See: Self-clocking signal.
Technology for timekeeping standards
The primary time standard in the U.S. is currently NIST-F1, a laser-cooled Cs fountain,[34] the latest in a series of time and frequency standards, from the ammonia-based atomic clock (1949) to the caesium-based NBS-1 (1952) to NIST-7 (1993). The respective clock uncertainty declined from 10,000 nanoseconds per day to 0.5 nanoseconds per day in 5 decades.[35] In 2001 the clock uncertainty for NIST-F1 was 0.1 nanoseconds/day. Development of increasingly accurate frequency standards is underway.
In this time and frequency standard, a population of caesium atoms is laser-cooled to temperatures of one microkelvin. The atoms collect in a ball shaped by six lasers, two for each spatial dimension, vertical (up/down), horizontal (left/right), and back/forth. The vertical lasers push the caesium ball through a microwave cavity. As the ball is cooled, the caesium population cools to its ground state and emits light at its natural frequency, stated in the definition of second above. Eleven physical effects are accounted for in the emissions from the caesium population, which are then controlled for in the NIST-F1 clock. These results are reported to BIPM.
Additionally, a reference hydrogen maser is also reported to BIPM as a frequency standard for TAI (international atomic time).
The measurement of time is overseen by BIPM (Bureau International des Poids et Mesures), located in Sèvres, France, which ensures uniformity of measurements and their traceability to the International System of Units (SI) worldwide. BIPM operates under authority of the Metre Convention, a diplomatic treaty between fifty-one nations, the Member States of the Convention, through a series of Consultative Committees, whose members are the respective national metrology laboratories.
Time in cosmology
The equations of general relativity predict a non-static universe. However, Einstein accepted only a static universe, and modified the Einstein field equation to reflect this by adding the cosmological constant, which he later described as the biggest mistake of his life. But in 1927, Georges Lemaître (1894–1966) argued, on the basis of general relativity, that the universe originated in a primordial explosion. At the fifth Solvay conference, that year, Einstein brushed him off with "Vos calculs sont corrects, mais votre physique est abominable."[36] (“Your math is correct, but your physics is abominable”). In 1929, Edwin Hubble (1889–1953) announced his discovery of the expanding universe. The current generally accepted cosmological model, the Lambda-CDM model, has a positive cosmological constant and thus not only an expanding universe but an accelerating expanding universe.
If the universe were expanding, then it must have been much smaller and therefore hotter and denser in the past. George Gamow (1904–1968) hypothesized that the abundance of the elements in the Periodic Table of the Elements, might be accounted for by nuclear reactions in a hot dense universe. He was disputed by Fred Hoyle (1915–2001), who invented the term 'Big Bang' to disparage it. Fermi and others noted that this process would have stopped after only the light elements were created, and thus did not account for the abundance of heavier elements.
Gamow's prediction was a 5–10-kelvin black-body radiation temperature for the universe, after it cooled during the expansion. This was corroborated by Penzias and Wilson in 1965. Subsequent experiments arrived at a 2.7 kelvins temperature, corresponding to an age of the universe of 13.8 billion years after the Big Bang.
General relativity gave us our modern notion of the expanding universe that started in the Big Bang. Using relativity and quantum theory we have been able to roughly reconstruct the history of the universe. In our epoch, during which electromagnetic waves can propagate without being disturbed by conductors or charges, we can see the stars, at great distances from us, in the night sky. (Before this epoch, there was a time, before the universe cooled enough for electrons and nuclei to combine into atoms about 377,000 years after the Big Bang, during which starlight would not have been visible over large distances.)
See also
1. ^ Considine, Douglas M.; Considine, Glenn D. (1985). Process instruments and controls handbook (3 ed.). McGraw-Hill. pp. 18–61. ISBN 0-07-012436-1.
3. ^ For example, Galileo measured the period of a simple harmonic oscillator with his pulse.
4. ^ a b Otto Neugebauer The Exact Sciences in Antiquity. Princeton: Princeton University Press, 1952; 2nd edition, Brown University Press, 1957; reprint, New York: Dover publications, 1969. Page 82.
5. ^ See, for example William Shakespeare Hamlet: " ... to thine own self be true, And it must follow, as the night the day, Thou canst not then be false to any man."
6. ^ "Heliacal/Dawn Risings". Solar-center.stanford.edu. Retrieved 2012-08-17.
7. ^ Farmers have used the sun to mark time for thousands of years, as the most ancient method of telling time. Archived 2010-07-26 at the Wayback Machine
8. ^ Eratosthenes, On the measure of the Earth calculated the circumference of Earth, based on the measurement of the length of the shadow cast by a gnomon in two different places in Egypt, with an error of -2.4% to +0.8%
9. ^ Fred Hoyle (1962), Astronomy: A history of man's investigation of the universe, Crescent Books, Inc., London LC 62-14108, p.31
10. ^ The Mesopotamian (modern-day Iraq) astronomers recorded astronomical observations with the naked eye, more than 3500 years ago. P. W. Bridgman defined his operational definition in the twentieth c.
11. ^ Naked eye astronomy became obsolete in 1609 with Galileo's observations with a telescope. Galileo Galilei Linceo, Sidereus Nuncius (Starry Messenger) 1610.
12. ^ http://tycho.usno.navy.mil/gpstt.html http://www.phys.lsu.edu/mog/mog9/node9.html Today, automated astronomical observations from satellites and spacecraft require relativistic corrections of the reported positions.
13. ^ "Unit of time (second)". SI brochure. International Bureau of Weights and Measures (BIPM). pp. Section Retrieved 2008-06-08.
14. ^ S. R. Jefferts et al., "Accuracy evaluation of NIST-F1".
15. ^ Fred Adams and Greg Laughlin (1999), Five Ages of the Universe ISBN 0-684-86576-9 p.35.
16. ^ Charles Hose and William McDougall (1912) The Pagan Tribes of Borneo, Plate 60. Kenyahs measuring the Length of the Shadow at Noon to determine the Time for sowing PADI p. 108. This photograph is reproduced as plate B in Fred Hoyle (1962), Astronomy: A history of man's investigation of the universe, Crescent Books, Inc., London LC 62-14108, p.31. The measurement process is explained by: Gene Ammarell (1997), "Astronomy in the Indo-Malay Archipelago", p.119, Encyclopaedia of the history of science, technology, and medicine in non-western cultures, Helaine Selin, ed., which describes Kenyah Tribesmen of Borneo measuring the shadow cast by a gnomon, or tukar do with a measuring scale, or aso do.
18. ^ Watson, E (1979) "The St Albans Clock of Richard of Wallingford". Antiquarian Horology 372-384.
19. ^ Jo Ellen Barnett, Time's Pendulum ISBN 0-306-45787-3 p.99.
20. ^ Galileo 1638 Discorsi e dimostrazioni matematiche, intorno á due nuoue scienze 213, Leida, Appresso gli Elsevirii (Louis Elsevier), or Mathematical discourses and demonstrations, relating to Two New Sciences, English translation by Henry Crew and Alfonso de Salvio 1914. Section 213 is reprinted on pages 534-535 of On the Shoulders of Giants:The Great Works of Physics and Astronomy (works by Copernicus, Kepler, Galileo, Newton, and Einstein). Stephen Hawking, ed. 2002 ISBN 0-7624-1348-4
21. ^ Newton 1687 Philosophiae Naturalis Principia Mathematica, Londini, Jussu Societatis Regiae ac Typis J. Streater, or The Mathematical Principles of Natural Philosophy, London, English translation by Andrew Motte 1700s. From part of the Scholium, reprinted on page 737 of On the Shoulders of Giants:The Great Works of Physics and Astronomy (works by Copernicus, Kepler, Galileo, Newton, and Einstein). Stephen Hawking, ed. 2002 ISBN 0-7624-1348-4
22. ^ Newton 1687 page 738.
23. ^ pp. 182–195. Stephen Hawking 1996. The Illustrated Brief History of Time: updated and expanded edition ISBN 0-553-10374-1
24. ^ Erwin Schrödinger (1945) What is Life?
25. ^ G. Nicolis and I. Prigogine (1989), Exploring Complexity
26. ^ R. Kapral and K. Showalter, eds. (1995), Chemical Waves and Patterns
27. ^ Ilya Prigogine (1996) The End of Certainty pp. 63–71
28. ^ Clemmow, P. C. (1973). An introduction to electromagnetic theory. CUP Archive. pp. 56–57. ISBN 0-521-09815-7., Extract of pages 56, 57
29. ^ Henri Poincaré, (1902). Science and Hypothesis Eprint Archived 2006-10-04 at the Wayback Machine
30. ^ Einstein 1905, Zur Elektrodynamik bewegter Körper [On the electrodynamics of moving bodies] reprinted 1922 in Das Relativitätsprinzip, B.G. Teubner, Leipzig. The Principles of Relativity: A Collection of Original Papers on the Special Theory of Relativity, by H.A. Lorentz, A. Einstein, H. Minkowski, and W. H. Weyl, is part of Fortschritte der mathematischen Wissenschaften in Monographien, Heft 2. The English translation is by W. Perrett and G.B. Jeffrey, reprinted on page 1169 of On the Shoulders of Giants:The Great Works of Physics and Astronomy (works by Copernicus, Kepler, Galileo, Newton, and Einstein). Stephen Hawking, ed. 2002 ISBN 0-7624-1348-4
31. ^ Taylor (2000). "Exploring Black Holes: Introduction to General Relativity" (PDF). Addison Wesley Longman.
32. ^ Schrödinger, E. (1 November 1926). "An Undulatory Theory of the Mechanics of Atoms and Molecules". Physical Review. American Physical Society (APS). 28 (6): 1049–1070. Bibcode:1926PhRv...28.1049S. doi:10.1103/physrev.28.1049. ISSN 0031-899X.
33. ^ A Brief History of Atomic Clocks at NIST Archived 2009-02-14 at the Wayback Machine
34. ^ D. M. Meekhof, S. R. Jefferts, M. Stepanovíc, and T. E. Parker (2001) "Accuracy Evaluation of a Cesium Fountain Primary Frequency Standard at NIST", IEEE Transactions on Instrumentation and Measurement. 50, no. 2, (April 2001) pp. 507-509
35. ^ James Jespersen and Jane Fitz-Randolph (1999). From sundials to atomic clocks : understanding time and frequency. Washington, D.C. : U.S. Dept. of Commerce, Technology Administration, National Institute of Standards and Technology. 308 p. : ill. ; 28 cm. ISBN 0-16-050010-9
36. ^ John C. Mather and John Boslough (1996), The Very First Light ISBN 0-465-01575-1 p. 41.
37. ^ George Smoot and Keay Davidson (1993) Wrinkles in Time ISBN 0-688-12330-9 A memoir of the experiment program for detecting the predicted fluctuations in the cosmic microwave background radiation.
38. ^ Martin Rees (1997), Before the Beginning ISBN 0-201-15142-1 p. 210.
39. ^ Prigogine, Ilya (1996), The End of Certainty: Time, Chaos and the New Laws of Nature. ISBN 0-684-83705-6 On pages 163 and 182.
Further reading
• Boorstein, Daniel J., The Discoverers. Vintage. February 12, 1985. ISBN 0-394-72625-1
• Dieter Zeh, H., The physical basis of the direction of time. Springer. ISBN 978-3-540-42081-1
• Kuhn, Thomas S., The Structure of Scientific Revolutions. ISBN 0-226-45808-3
• Mandelbrot, Benoît, Multifractals and 1/f noise. Springer Verlag. February 1999. ISBN 0-387-98539-5
• Prigogine, Ilya (1984), Order out of Chaos. ISBN 0-394-54204-5
• Serres, Michel, et al., "Conversations on Science, Culture, and Time (Studies in Literature and Science)". March, 1995. ISBN 0-472-06548-3
• Stengers, Isabelle, and Ilya Prigogine, Theory Out of Bounds. University of Minnesota Press. November 1997. ISBN 0-8166-2517-4
External links
This page was last updated at 2020-12-21 00:00, update this pageView original page
|
a9f1d8ae2822188c | The growth of structure in the intergalactic medium
Sabino Matarrese and Roya Mohayaee
Dipartimento di Fisica ‘Galileo Galilei’, Università di Padova, via Marzolo 8, I-35131 Padova, Italy
Dipartimento di Fisica, Università di Roma ‘La Sapienza’, P.le Aldo Moro 5, 00185, Roma, Italy
A stochastic adhesion model is introduced, with the purpose of describing the formation and evolution of mildly nonlinear structures, such as sheets and filaments, in the intergalactic medium (IGM), after hydrogen reionization. The model is based on replacing the overall force acting on the baryon fluid – as it results from the composition of local gravity, pressure gradients and Hubble drag – by a mock external force, self-consistently calculated from first-order perturbation theory. A small kinematic viscosity term prevents shell-crossing on small scales (which arises because of the approximate treatment of pressure gradients). The emerging scheme is an extension of the well-known adhesion approximation for the dark matter dynamics, from which it only differs by the presence of a small-scale ‘random’ force, characterizing the IGM. Our algorithm is the ideal tool to obtain the skeleton of the IGM distribution, which is responsible for the structure observed in the low-column density Ly forest in the absorption spectra of distant quasars.
Cosmology: theory – intergalactic medium – large-scale structure of universe
1 Introduction
The analysis of the spectra of distant QSOs blueward of the Ly emission, the so-called Ly forest, has revealed the presence of large coherent structures in the cosmic distribution of the neutral hydrogen, left over by the reionization process (e.g. Rauch 1998 and references therein). The low-column density ( cm) Ly forest is thought as being generated by unshocked gas in voids and mildly overdense fluctuations of the photoionized intergalactic medium (IGM). The IGM is, in turn, expected to trace the underlying dark matter (DM) distribution on large scales, while the thermal pressure in the gas smears small-scale DM nonlinearities. According to this picture, the large coherent spatial structures in the IGM, which are probed by quasar absorption spectra, reflect the underlying network of filaments and sheets in the large-scale DM distribution (e.g. Efstathiou, Schaye & Theuns 2000), the so-called ‘cosmic web’ (Bond, Kofman & Pogosyan 1996). Thanks to this simple picture, a number of semi-analytical, or pseudo-hydrodynamical particle-mesh codes have been developed, which all aim at simulating the IGM distribution, with enough resolution to predict the main features of the Ly forest (e.g. Bi & Davidsen 1997; Croft et al. 1998; Gnedin & Hui 1996, 1998; Hui, Gnedin & Zhang 1997; Petitjean, Mücket & Kates 1995; Gaztañaga & Croft 1999; Meiksin & White 2000; Viel et al. 2001). This physical interpretation of the Ly forest is, moreover, largely corroborated by numerical simulations based on fully hydrodynamical codes (e.g. Cen et al. 1994; Zhang, Anninos & Norman 1995, 1997; Miralda-Escudé et al. 1996; Hernquist et al. 1996; Theuns et al. 1998).
Although the above scenario captures the main physical processes which determine the coarse-grained dynamics of the baryons, it neglects a number of fine-grained details which distinguish the baryon distribution from the underlying DM one. There has been, recently, growing observational and theoretical interest on the low-column density Ly forest, as an important indicator of the state of the Universe in the redshift range . We then believe it has become necessary to develop new and more sophisticated techniques, enabling to understand the IGM dynamics and accurately simulate its clustering, without resorting to the full machinery of hydro-simulations. The model presented here aims at providing an accurate scheme for the growth and evolution of mildly nonlinear structures in the baryon gas, accounting for the detailed thermal history of the IGM.
The stochastic adhesion model introduced here is based on applying the forced Burgers equation of nonlinear diffusion (Forster, Nelson & Stephen 1977; Kardar, Parisi & Zhang 1986; Barabási & Stanley 1995; E et al. 1997; Frisch & Bec 2000 and references therein) to the cosmological framework, where it can be given the general form
Here are comoving coordinates, the time variable is chosen to coincide with the cosmic scale-factor (although, for the pure DM evolution, a better choice would be the linear growth factor of density fluctuations), is the (suitably rescaled) peculiar velocity field, which is here assumed to be irrotational, , and denotes the convective time-derivative. The coefficient of kinematical viscosity, , is always assumed to be small. In standard applications of the forced Burgers equation, is a random external potential. For a general cosmological fluid, such as the dark matter or the baryons, , where is the local gravitational potential (up to an appropriate rescaling, whose precise form will be given in Section 2), which must be consistently determined via the Local Poisson equation, and is the specific enthalpy of the fluid (up to the same rescaling), which vanishes in the DM case. Therefore, our potential in the RHS of eq. (1) is far from being ‘external’, as it is, for instance, dynamically related to the velocity field which appears on the LHS of the same equation.
The main idea of this paper is that, if is given an approximate expression, e.g. by using the results of perturbation theory, it can be legitimately treated as a truly external random potential. The choice of variables in eq. (1) is such that, if this approximation technique is applied to the DM evolution, to first order it reduces to the well-known adhesion model (Gurbatov, Saichev & Shandarin, 1985, 1989; Kofman & Shandarin 1988), which was introduced in cosmology to extend the validity of the Zel’dovich approximation (Zel’dovich 1970) beyond the epoch of first caustic formation. In the DM case, therefore, only a second-order calculation would produce a non-vanishing external force, whose presence would then affect rather small scales. Quite different is the case of the collisional baryon component, where, already at first order in perturbation theory (both Eulerian and Lagrangian) a non-zero contribution to generally comes out: it is originated from the unbalanced composition of Hubble drag, local gravity and gas pressure, thus representing a genuine baryonic feature.
The presence of the kinematical viscosity term requires some explanation. Its role in the present context is twofold. First, it prevents the formation of multi-streams, which are well-known to affect the DM dynamics, but may also appear as a spurious effect in the collisional case, when pressure gradients are given an approximate form in terms of linear theory. Second, it allows to transform the problem into a linear one, through the so-called Hopf-Cole substitution (e.g. Burgers 1974) , where the ‘expotential’ obeys the random heat, linear diffusion, equation
whose solution is expressible in terms of path-integrals (e.g. Feynman & Hibbs 1965). As we will see in Sections 5 and 6, a suitable approximation technique, valid in the small case, allows to give the velocity field a simple and finite form, more convenient for practical applications.
The forced Burgers equation has been used to describe a variety of different physical problems, ranging from interfacial growth in condensed matter physics, where it is known as the KPZ model (Kardar, Parisi & Zhang 1986), to fully developed turbulence (e.g. Bouchaud, Mézard & Parisi 1995). Later in the paper we will come back to this interesting connection and discuss both the analogies and the peculiarities of the cosmological application of this equation.
The stochastic adhesion approximation provides an analytical description of the generation, and subsequent merging of shocks, which give rise to the thin network of filaments and sheets in the IGM spatial distribution. Their existence is clearly observed in the spectra of high-redshift QSOs, e.g. through the presence of common absorption lines in the Ly forest of multiple QSOs, with lines of sight separated by several comoving Mpc at redshifts (e.g. Rauch 1998 and references therein). An important property of our model is that it allows to draw the skeleton of the IGM distribution through a straightforward extension of the geometrical technique applied in the free adhesion model (e.g. Sahni & Coles 1995 and references therein). The present paper will be mostly devoted to provide the physical and mathematical bases for our stochastic adhesion model. Simulations of the IGM large-scale structure will be obtained only from the simplified inviscid () model. In a subsequent paper we will implement our algorithm to produce numerical simulations of the IGM distribution and to study the statistical properties of the IGM density and velocity fields. Let us stress that approximation schemes like the present one can be particularly useful, as they allow to better account for the cosmic variance of large-scale modes, which is poorly probed (especially at low redshifts) by the existing hydro-simulations, which are forced to adopt small computational boxes to increase the small-scale resolution [e.g. the discussion in (Viel et al. 2001)]. Even more interesting is the possibility to combine our scheme with a hydro-code, using the former to provide the large-scale skeleton of the IGM and the latter to achieve the required resolution on small scales.
The approach most closely related to ours is that recently proposed by Jones (1996, 1999), which aims at modelling the nonlinear clustering of the baryonic material. In Jones’ model, however, a different set of variables is adopted, which does not allow a direct comparison neither with the present scheme, nor with the Zel’dovich and the adhesion approximations, in the collisionless limit. The most important difference is that, in Jones’ model, the external random term is identified with the local gravitational potential , which is treated as an ‘external’ one, as it is essentially generated by the dominant DM component; moreover, no explicit account for the gas pressure is given. In our model instead, the external potential is obtained by linearly approximating the composition of Hubble drag, local gravity and thermal pressure which act on the IGM fluid elements; in our case is an external potential because it is determined by a convolution of the initial gravitational potential with the IGM linear filter.
It should be clear from this introduction that the forced Burgers equation might have wider applications in the cosmological structure formation problem. Its relevance (in terms of the closely related random heat equation) in the cosmological framework has been first advocated by Zel’dovich and collaborators in the mid eighties (Zel’dovich et al. 1985, 1987), as a means to describe the possible origin of intermittency in the matter distribution. The intermittency phenomenon consists in the appearance of rare high peaks, where most of the matter is concentrated, separated by vast regions of reduced intensity. From the statistical point of view, intermittency in a stochastic process is signalled by an anomalous scaling of e.g. structure functions (moments of velocity increments): moments of order , made dimensionless by suitable powers of the second-order moment, grow without bound on small scales (e.g. Gärtner J., Molchanov 1990; Frisch 1995). This may be viewed as increased non-Gaussianity on small scales, which, in Fourier space, appears as a slow decrease in the amplitude of Fourier modes with increasing wavenumber and by a definite phase relation between them (Zel’dovich et al. 1985, 1987). Actually, the occurrence of this form of intermittency, which seems too extreme and far from our present understanding of the large-scale structure of the Universe, is usually obtained under special properties of the noise and only appears at asymptotically late times. Much more interesting for the structure formation problem is a second phenomenon, called intermediate intermittency, also described by the forced Burgers equation, which consists in the formation of a cellular, or network structure, with “thin channels of raised intensity (the rich phase), separating isolated islands of the poor phase” (Zel’dovich et al. 1985). This second phenomenon is expected to arise as an intermediate asymptotic situation.
Can one take advantage of the description of the structure formation process in terms of intermediate intermittency to predict the specific non-Gaussian statistics which characterizes the nonlinear density field? This is, we believe, a challenging issue, which would deserve further analysis.
The prototype distribution that describes intermittency is the Lognormal one, which naturally arises in multiplicative processes, through the action of the Central Limit Theorem (e.g. Shimizu & Crow 1988). In the DM case, Coles & Jones (1991) proposed that a local Lognormal mapping of the linear density field can describe the nonlinear evolution of structures in the Universe. Detailed comparison with N-body simulations showed that this model fits very well the bulk of the probability density function (PDF) of the mass density field in N-body simulations, for moderate values of the rms overdensity (e.g. Bernardeau & Kofman 1995). Where the Lognormal fails is in reproducing the correct PDF for the strong clustering regime (), as well as the high- and low-density tails of the PDF, even on mildly nonlinear scales. Moreover, the predicted spatial pattern is too clumpy and poorly populated of extended structures to reproduce that of simulations (Coles, Melott & Shandarin 1993). A ‘skewed’ Lognormal PDF is proposed by Colombi (1994), to follow the transition from the weakly to the highly nonlinear regime.
Quite noticeably, the Lognormal model has also been used in connection with the IGM dynamics. Indeed, the semi-analytical model proposed by Bi and collaborators (Bi 1993; Bi et al. 1995; Bi & Davidsen 1997; see also Feng & Fang 2000; Roy Choudhury, Padmanabhan & Srianand 2000; Roy Choudhury, Srianand & Padmanabhan 2000; Viel et al. 2001), to simulate the low-column density Ly forest, is based on a local Lognormal model, similar to the one of Coles & Jones (1991). Comparison with the results of more refined techniques has shown that it provides a good fit of the column density distribution in a wide range of values, but it tends to underestimate the abundance of lower column density systems (Hui, Gnedin & Zhang 1997). Moreover, the Lognormal model for the IGM tends to produce an excess of saturated absorption lines in the simulated transmitted flux, compared with real QSO spectra.
The plan of the paper is as follows. In Section 2 we review the equations which govern the Newtonian dynamics of a two-component fluid of dark matter (DM) and baryons, in the expanding Universe; these are solved in Section 3 at the Eulerian linear level and under fairly general assumptions on the baryon equation of state. This allows to obtain the IGM filter connecting the linear baryon density fluctuations to the DM ones. A simplified version of our model is presented in Section 4: it enables one to follow the combined dark matter and baryon dynamics on weakly nonlinear scales, within the laminar regime. In Section 4 we also give the first-order Lagrangian solution for the baryon dynamics, which is then used to perform numerical simulations of the IGM distribution. In Section 5 we discuss the modifications introduced in the baryon dynamics by our improved final model, where we add a kinematic viscosity term, to avoid the occurrence of shell-crossing singularities (arising also in the baryon fluid, because of the approximate treatment of pressure gradients). This leads to our stochastic adhesion model for the dynamics of the intergalactic medium. According to this model, the approximate IGM dynamics is governed by the forced Burgers equation for the baryon peculiar velocity field, whose solution can be expressed as a path-integral. In the physically relevant limit of small viscosity, a solution can be found through the standard saddle-point technique. In Section 6 we discuss how to implement our solution in terms of the first-order Lagrangian particle trajectories previously obtained. A geometrical algorithm is also outlined, which allows to draw the skeleton of the IGM distribution, given a realization of the gravitational potential and the linear IGM filter. A preliminary analysis of the statistical properties of the density field obtained through the stochastic adhesion model is given in Section 7. The concluding Section 8 contains a brief discussion on the possible applications of our model.
2 Dynamics of dark matter and baryons in the expanding Universe
The Newtonian dynamics of a self-gravitating two-component fluid, made of collisionless dark matter and collisional baryonic gas is governed by the continuity, Euler and Poisson equations. The continuity equation for the dark matter component (indicated by a subscript ) reads
where is the mass density, the peculiar velocity and the Hubble parameter at time ; the Euler equation reads
where is the peculiar gravitational potential. For the baryon fluid (indicated by a subscript ), we have
where is the pressure. The peculiar gravitational potential obeys the Poisson equation , where the mass-density fluctuation takes contribution both from dark matter and baryons. Let then and be the mean mass fraction of these two types of matter. We have
where is the Hubble constant, is the closure density of non-relativistic matter (both dark matter and baryons) today, and are the fractional DM and baryon overdensities. In the analytical calculations which follow we will neglect, for simplicity, the self-gravity of the baryons, i.e. we will put .
To close the system we need the IGM equation of state, which can be taken of the polytropic form
where is Boltzman’s constant, the adiabatic index, the mean molecular weight (for fully ionized gas with primordial abundances it is about ), the proton mass and the baryon mean density. In writing the pressure term we have assumed the power-law temperature-density relation where is the IGM temperature at mean density, at redshift (e.g. Hui & Gnedin 1997; Schaye et al. 1999; McDonald et al. 2000). This is adequate for low to moderate baryon overdensity (), where the temperature is locally determined by the interplay between photoheating by the UV background and adiabatic cooling due to the Universe expansion. The underlying assumption for this ‘equation of state’ is a tight local relation between the temperature and the baryon density, which is only true for unshocked gas (e.g. Efstathiou et al. 2000).
In what follows it will prove convenient to change time variable from the cosmic time to the scale-factor (e.g. Shandarin & Zel’dovich 1989; Matarrese et al. 1992; Sahni & Coles 1995). This defines new peculiar velocity fields . Let us also introduce the dimensionless comoving densities and a scaled gravitational potential . For the redshift range of interest here (), one can write . A more exact treatment of the DM component would require the use of the growing mode of linear density perturbations, , as time variable (e.g. Gurbatov et al. 1989; Catelan et al. 1995); once again, for the range of redshifts of interest here one can safely assume . In the Einstein-de Sitter case the present treatment becomes exact.
We have the following set of equations for the DM component
where denotes the convective, or Lagrangian, derivative w.r.t. our new time variable . For the baryons, we have
The Poisson equation also gets simplified:
3 Linear theory
3.1 Dark matter in the linear regime
Let us start by writing the above set of equations for the DM component in the (Eulerian) linear approximation. The continuity and Euler equations, respectively, simplify to
where, from now on, dots will denote partial differentiation w.r.t. the scale-factor . The solutions are well known, and we will simply report them here. Keeping only the growing mode terms we have
where .
3.2 Baryons in the linear regime
In order to make a similar analysis for the baryons we need to specify the time (or redshift) dependence of the baryon mean temperature , which generally depends upon the thermal history of the IGM, as well as on the spectral shape of the UV background.111Similar reasonings would actually also apply to the adiabatic index , which we will, however, approximate as being constant in what follows. We will here consider a simple, but fairly general law of the type which will allow to get exact solutions of the linearized baryon equations.
At high redshifts, before decoupling, the mean baryon temperature drops like ; when Compton scattering becomes inefficient adiabatic cooling of the baryons implies , which makes it practically vanish before reionization [this will justify our initial conditions for the evolution of the baryon overdensity in eqs. (18)]. As the Universe reionizes, the IGM temperature rises and a different redshift dependence takes place. Various types of dependence have been considered in the literature. The linear law is often assumed for simplicity. According to Miralda-Escudé & Rees (1994) and Hui & Gnedin (1997), long after hydrogen reionization has occurred the diffuse IGM settles into an asymptotic state where adiabatic cooling is balanced by photoheating, leading to the power-law , with . Much steeper an exponent, , has been recently adopted by Bryan & Machacek (2000) and Machacek et al. (2000). More complex is the picture which emerges from hydrodynamical simulations of the IGM, where the mean gas temperature appears to retain some memory of when and how it was reionized (e.g. Schaye et al. 2000). Observational constraints on should also be taken into account (e.g. Shaye et al. 1999, 2000; Bryan & Machacek 2000; Ricotti et al. 2000; McDonald et al. 2000).
In view of this variety of assumptions and results it seems reasonable to look for solutions of our equations assuming a general exponent (although, in practical applications we will focus on cases with ). Moreover, as we will see later, our model can be straightforwardly extended to any redshift dependence of the mean IGM temperature.
Solutions of the hydrodynamical equations for general values of have been obtained by Bi, Börner and Chu (1992). Many authors (Peebles 1984; Soloveva & Starobinskii 1985; Shapiro, Giroux & Babul 1994; Nusser 2000) have given solutions for the case . In particular, Nusser (2000) has obtained the linear IGM overdensity in the case , extending his calculation to the case of non-negligible baryon fraction (i.e. for ), i.e. accounting for the baryon self-gravity. We will give here an extensive presentation of this problem for general values of , both because we are going to use the linear baryon overdensity in our nonlinear model for the IGM dynamics, and because of the specific form taken by the solutions for our set of initial conditions, which had not been previously obtained in the literature. The particular case needs to be studied separately; this will be done in the next subsection.
Let us start by writing the linearized continuity and Euler equations for the baryonic component,
Combining these equations together, using the linear solutions for the DM and our temperature-redshift relation we obtain, in Fourier space,
The redshift-independent wavenumber is related to the comoving Jeans wavenumber through
Only for the two wavenumbers coincide and the Jeans length becomes a constant.
Equation (16) will be solved with the initial, or, more precisely, matching conditions at ,
which are appropriate if the IGM undergoes sudden reionization at (Nusser 2000).
3.2.1 Case
In the case , we try a solution of the type , where and can be found by substitution back into equation (16). This gives
where and and are integration constants. From the latter expression we see that on large scales the homogeneous part only contains decaying modes; on smaller scales, instead, the homogeneous part is characterized by an oscillatory behavior with decaying amplitude. Asymptotically in time one recovers the well known solution (e.g. Peebles 1993) .
In spite of the absence of growing modes, the homogeneous part of our solution does not necessarily become negligible at the time scales relevant to our problem. It is therefore important, to exactly evaluate the homogeneous part, by relating it to our initial conditions above. For the two constants of integration we find and , where
The baryon peculiar velocity immediately follows from the linear continuity equation. We obtain
3.2.2 General case:
We will now look for the solution of eq. (16), for arbitrary values of , excluding the case . In order to find the full solution to the inhomogeneous equation (16), we first notice that its homogeneous counterpart is solved in terms of Bessel functions, namely, , where and we introduced the independent variable . The full solution of eq. (16) for any , obtained by the standard Green’s method, reads
where is the Lommel function.
By imposing the initial conditions (18) on our solution, we obtain and , where
and the Bessel and Lommel functions are evaluated at .
Once again, by replacing this expression in the continuity equation, we obtain the linear peculiar velocity of the baryons,
3.3 IGM linear filter
The results of the previous section indicate that one can always express the linear baryon density field as a linear convolution of the linear DM one. In Fourier space, one can therefore define an IGM linear filter, , such that
For the case we obtain
which corresponds to eq. (A7) of (Gnedin & Hui 1998). Let us look at the main features of the IGM filter, at a finite time after reionization. It is immediate to check that on large scales tends to unity: on scales much larger than the baryon Jeans length, the baryon distribution always traces the DM, due to the negligible role of pressure gradients. On scales much smaller than the Jeans length, instead, the baryon overdensity undergoes rapid oscillations. As noticed by previous authors (e.g. Gnedin & Hui 1998; Nusser 2000), and contrary to naive expectations, has no power-law asymptote on small scales.
In the general case , instead we have
It is worth mentioning that for half-integer values of the Bessel and the Lommel functions in the above equation can be written in terms of trigonometric functions.
Analogous relations hold for the baryon peculiar velocity in terms of the DM one [see eqs. (21) and (24) above]. Also in this case, of course, in the large-scale limit, as it would be easy to see. The oscillating behavior of on small scales is in general more complicated and depends upon the value of .
Plots of the IGM linear filter for some values of are given in Figure 1, at various redshifts. Hydrogen reionization is assumed to have occurred at a redshift . Note that, on scales smaller than the baryon Jeans length, acoustic oscillations, with decaying amplitude, persist even long after reionization. Note also that different equations of state lead to a different small-scale behaviour even if the IGM density is plotted vs. the scaled wavenumber . This is because eq. (16), which governs the linear evolution of baryon density fluctuations, retains a dependence on the slope in the last term on the LHS, which describes the effect of pressure gradients.
Figure 1: Plots of the ratio of the baryon to dark matter density, , versus at different redshifts, for various values of in the mean IGM temperature-redshift relation, . Hydrogen reionization is taken at .
4 Modelling the weakly nonlinear IGM dynamics
With our choice of time variable, the Euler equation, both for the DM and the baryons, takes the simple form
where the potential takes a different expression for the two fluids. As the force acting on both fluids is conservative, in the absence of initial vorticity the flow remains irrotational (actually, this is only true prior to shell-crossing, for the DM component), and we can express the peculiar velocity in terms of a velocity potential, . The potential in the RHS of the Euler equation reads
for the DM and IGM components respectively.
The dynamical model we introduce here is largely inspired by the Zel’dovich approximation (Zel’dovich 1970), and is based on replacing the potential , which should be consistently calculated using the full set of equations for the two fluids, by a mock external potential, obtained by evaluating its expression within linear (Eulerian) perturbation theory. Indeed, this model might be seen as only the first step of a more general, iterative approximation scheme.
In order to evaluate to linear order the quantity one can either compute to first order any single contribution or compute its Laplacian by taking the divergence of the LHS of the linearized Euler equation. This yields , and, using the linearized continuity equation,
We thus conclude that, to first order in perturbation theory the potential is given by the second ‘time’-derivative of the linear overdensity. This fact immediately implies that, to first order, (neglecting the contribution from decaying modes), while
In particular, for only the homogeneous part of the solution contributes, and we obtain
with and the integration constants and as in eq. (20). For we get
where and are given by eq. (23). Plots of the function are given in Figure 2, at different redshifts after reionization, for various values of .
Figure 2: Plots of the ratio versus at different redshifts, for various values of . The reionization redshift is .
Therefore, our model is described by the two Euler equations:
for the DM component, and
for the baryons.
The solution of the above DM equation of motion is well known: it corresponds to the Zel’dovich approximation, according to which mass elements move along straight lines, with constant ‘velocity’ impressed by local linear fluctuations of the gravitational force at their initial Lagrangian location : . We therefore have
The trajectories of the IGM fluid elements are instead more complex, as our equation implies a non-zero force acting on them,
where we have formally extended the time integration from to , as, before reionization, when the Jeans length is negligible, .
It will also prove convenient in the following to transform the baryon Euler equation (34) into the following Bernoulli, or Hamilton-Jacobi, equation for the velocity potential:
The extreme simplicity of our scheme is shown by the fact that the only information needed to evolve the baryon distribution in the weakly nonlinear regime is the IGM filter . It is immediate to realize that, because of our derivation of , its expression, , in terms of the linear baryon overdensity is fully general, i.e. it would apply to general reionization histories [i.e. to general relations], general baryon equations of state and to the case in which the baryon self-gravity is properly taken into account (i.e. the general case ). Note that the IGM linear filter is simply the ratio of the baryon to DM transfer function at the given time (provided the reionization process has been taken into account in evaluating the baryon transfer function ).
One might think that there is some degree of arbitrariness in the particular choice we made of which terms to linearize [those on the RHS of eq. (28)] and which ones to treat exactly (those on its LHS). The strongest motivation for such a choice is its analogy with the procedure leading to the Zel’dovich approximation for the DM component. In this sense, the choice is unique; as we will see in the next subsection, in fact, the external force obtained above and its effect on the IGM trajectories are closely connected to the results of first-order Lagrangian perturbation theory: no other choice would have provided such a connection. It should also be stressed that other successful approximation schemes for the evolution of the DM component, such as the ‘frozen flow’ (Matarrese et al. 1992) and ‘frozen potential’ (Brainerd, Scherrer & Villumsen 1993; Bagla & Padmanabhan 1994) ones, are indeed based on the same choice. The physical reason which makes a linear theory evaluation of reasonably accurate is that this quantity, for both fluids, contains terms that receive their dominant contribution from small wavenumbers. This was indeed the original motivation which led Zel’dovich to obtain his celebrated algorithm, although in modern language the Zel’dovich approximation is more commonly explained within the first-order Lagrangian approximation (e.g. Sahni & Coles 1995 and references therein). The link between these two alternative derivations is discussed in the next subsection.
4.1 Lagrangian approximation to the baryon trajectories
The trajectory of any mass element can be written in the general form
where is the ‘displacement vector’. As far as the evolution of the fluid is far from the strongly nonlinear regime, the displacement vector can be considered to be small, i.e. the Eulerian and Lagrangian positions of each particle are never too far apart. This consideration is at the basis of Lagrangian approximation methods. The basics of the first-order Lagrangian scheme applied to our two-component fluid are reported in Appendix A. Applying these ideas to our external force, we obtain
To first order in the displacement vector we can therefore replace the Eulerian force by its Lagrangian counterpart in the baryon trajectories, which leads to the much simpler form
where the ‘baryon potential’ is defined by
and is related to the peculiar gravitational potential by the Fourier-space expression .
Moreover, as shown in Appendix A, the trajectories described by eq. (40) represent the result of first-order Lagrangian perturbation theory applied to our baryon gas. Similar results have been recently obtained by Adler & Buchert (1999) for the case of a single self-gravitating collisional fluid (i.e. for ). There is an important difference between these trajectories and the DM ones. According to the Zel’dovich approximation, DM particles move along straight lines with constant velocity, whereas the baryons are generally accelerated along curved paths; this is due to the non-zero force, resulting from the composition of three terms: the local Hubble drag, the local gravitational force caused by the dominant DM component and the gradient of the gas pressure.
The baryon peculiar velocity which follows from this first-order Lagrangian approximation is
which, unlike the DM one, deviates from the initial velocity .
Let us mention an important property of the approximate expression for the velocity field that we obtained using first-order Lagrangian perturbation theory: unlike what happens in the DM case, the vector of eq. (42) is irrotational in Eulerian space only if we restrict its validity to first-order in the displacement vector; a non-zero velocity curl component, , in fact arises beyond this limit. This feature is not expected to imply serious problems as long as the system has not evolved too deeply into the nonlinear regime, i.e. until the baryon overdensity has not reached values . This is an unphysical feature deriving from the extrapolation of the first-order results to a regime where higher-order terms should be taken into account. A similar feature appears in the Lagrangian perturbation approach to the DM dynamics, when both the growing and decaying solutions are included (e.g. Buchert 1992). No problems of this type, however, occur when the Eulerian scheme above is adopted, i.e. when eq. (34) and its solution (36) are assumed. For this reason we will consider our ‘Eulerian model’ of eqs. (34) and (36) as the correct one and the Lagrangian scheme leading to eq. (40) as being essentially a shortcut to get approximated baryon trajectories.
The slight discrepancy between the Eulerian and Lagrangian schemes used to derive the external force acting on the baryons is a peculiarity of the collisional case. In the DM case, these two techniques – linearizing the RHS of eq. (28) in Eulerian space, and expanding to first order in Lagrangian perturbation theory – lead to identical results.
Approximation schemes for the low-density IGM dynamics which are closely related to our Lagrangian treatment have been studied by Reisenegger & Miralda-Escudé (1995), Gnedin & Hui (1996), and Hui, Gnedin & Zhang (1997). There are, however, important differences that we would like to point out: i) in our model the gas trajectories, and thus the resulting IGM spatial clustering depend on the specific ionization history (different values of in the simplest case); ii) our model is by definition able to exactly reproduce the behavior of the baryon component in the linear regime. In practice, while previous models adopt an IGM filter which is best-fitted to simulations, ours directly derives from baryon dynamics.
4.1.1 Numerical simulations of the IGM distribution in the laminar regime
In order to display the effect of the IGM filter on baryon trajectories, we have produced a set of numerical simulations based on the Lagrangian scheme. particles were laid down on a uniform cubic grid and moved according to eq. (40). Realizations of the peculiar gravitational potential were obtained in Fourier-space according to standard algorithms. The model shown in Figure 3 is a spatially flat, vacuum dominated cold dark matter (CDM) model, with (where is the Hubble constant, , in units of km s Mpc), a cosmological constant with present-day density parameter , CDM with and baryons with the remaining . A Zel’dovich primordial power-spectrum of adiabatic perturbations is assumed, with the standard normalization of CDM models222To correct for the small inaccuracy introduced by the use of , instead of the DM growth-factor, as time variable in our treatment, we apply a correction factor to the normalization, where is the linear growth-factor of DM density fluctuations, normalized to unity at ., (Viana & Liddle 1999), where is the rms mass fluctuation in a sharp-edged sphere of radius Mpc; we adopt the Bardeen et al. (1986) CDM transfer function, corrected to account for the baryon contribution, as in (Sugiyama 1995).
The box-size is Mpc. Three different IGM thermal histories are considered: in all cases we assume that reionization occurred at , but we assume that the mean IGM temperature evolves according to different power-laws afterwards, , with . Figure 3 shows a thin ( Mpc) slice of our simulation box at . The corresponding DM distribution is also shown for comparison. Note that different values of lead to a different small-scale distribution of the IGM, because of i) the different time dependence of the Jeans scale and ii) the different trajectories followed by the baryons after reionization. The comoving Jeans wavenumber has been set to the same value ( Mpc) in all our models, at redshift .
Figure 3: The spatial distribution of the dark matter and IGM, with different temperature-redshift relations, are shown at redshift . All models have the same Jeans length, , at this redshift.
4.1.2 The IGM density field in the laminar regime
Once particle trajectories are known, it is immediate to obtain the density field, using mass conservation, . If we adopt the Lagrangian expression of eq. (40), we get
The strain tensor can be locally diagonalized along principal axes, whose direction will generally depend upon time, unlike the DM case. Calling the corresponding eigenvalues, we write
which shows that a caustic singularity will form at the finite time , where is the largest eigenvalue of the time-dependent strain tensor. The time dependence of the eigenvalues, which on small scales becomes oscillatory, implies that the shell-crossing condition can even be met more than once by a given mass element along the same principal axis.
The extrapolation of our Lagrangian approximation beyond the formation of the first pancakes, leads to an artificial diffusion of these structures. This problem becomes more and more severe at low redshift, making any simplified description of the IGM – and of the Ly forest – based on the Lagrangian trajectories quite unreliable. The stochastic adhesion model presented in the next section aims precisely at overcoming this problem.
One might use eq. (44) for the density field to obtain an analytical expression for the probability distribution function (PDF) of the IGM density field, by a simple extension of the traditional Doroshkevich (1970) formalism (e.g. Kofman et al. 1994). Such a technique has been applied by Reisenegger & Miralda-Escudé (1995), who adopted a smoothed Zel’dovich approximation for the IGM evolution. Our model, however, allows to get a more refined description in which the PDF is sensitive to the IGM thermal history. Similarly, one might also obtain statistical information on the Ly forest in quasar absorption spectra, e.g. in terms of the column density distribution function of the lines (e.g. Hui, Gnedin & Zhang 1997); this too would display a dependence on the assumed , which might be tested against observational data. These, and related applications of our Lagrangian algorithm will be presented elsewhere.
The emergence of shell-crossing singularities in the velocity pattern of our collisional fluid should be understood as an artifact of the linearized treatment of pressure gradients in the Euler equation. This feature can be also seen as follows. If we evaluate the force on large scales, to lowest order in we find, for ,
which comes only from the homogeneous part of eq. (19), and
for , from the inhomogeneous term of eq. (22). In both cases the result can be expressed in the form , where , for . This shows that, as long as the velocity is in the linear regime, the effect of the linearized pressure gradient is similar to that of an effective kinematical viscosity333A related point has been recently made by Buchert & Domínguez (1998)., with coefficient , which smooths the velocity field. As soon as the system enters the mildly non-linear regime, however, deviates from its initial value and this smoothing is no more effective in stopping the formation of caustic singularities in the densest regions.
Two processes will actually prevent the formation of multi-streams in our physical system: the first is due to the binding action of gravity, which tends to keep pancakes and filaments thin, an effect which is experienced both by the DM and IGM components; the second is the nonlinear action of the gas pressure, which is instead characteristic of the baryonic component. Because of the difficulty to deal analytically with the gas pressure beyond any perturbative approximation, we will find convenient in the next section to introduce an artificial viscosity term in the baryon Euler equation, whose effect is to smear these shell-crossing singularities of the velocity field.
5 The stochastic adhesion approximation
The model for the formation of structures in the baryon distribution discussed in the last section is based on a suitable approximation for the force exerted on fluid elements by gravity and by the surrounding fluid patches through pressure gradients. The latter are most important on small scales where their effect is to smooth the gas fluctuations relative to the DM ones, thus preventing the occurrence of shell-crossing singularities in the collisional component; nonlinear effects here manifest themselves through the formation of shock-waves. In our scheme, just like in all similar schemes [e.g. the modified Zel’dovich scheme used by Gnedin & Hui (1996) and Hui, Gnedin & Zhang (1997)], the gas pressure is only included through a linear approximation. Because of this fact, when the system enters the nonlinear regime, i.e. when two particles come very close in space, shell-crossing can affect their evolution leading to the subsequent occurrence of multi-streams. In order to prevent this unphysical phenomenon we will adopt the same technique which proved so successful in the DM case to extend the Zel’dovich treatment beyond its actual range of validity: we add a kinematical viscosity term to our approximate Euler equation. This method is at the basis of the adhesion approximation for the formation of large-scale structure in the collisionless case (Gurbatov et al. 1985, 1989; Kofman & Shandarin 1988). The adhesion model is based on the three-dimensional generalization of Burgers’ equation of strong turbulence. It is the simplest equation which describes the formation and subsequent merging of shocks. According to the adhesion model, DM particles move according to the Zel’dovich approximation until they fall into pancakes, when, owing to the viscous force, they stick together. Next, pancakes drain into filaments, and filaments into clumps. The thickness of these structures is monitored by the value of the kinematical viscosity coefficient , and becomes infinitely thin as .
We assume that the equation of motion which governs the IGM dynamics is (from now on we will avoid the subscript ‘b’ on baryon quantities, where unnecessary)
where the kinematical viscosity coefficient is here assumed to be small, but non-zero. In our collisional case, moreover, there can be an extra reason to add such a term: the gas is effectively experiencing some shear viscosity, although on scales much smaller than those under consideration. The physical viscosity term should actually depend upon space and time, through the inverse of the local gas density. One might however argue that, in the physically relevant limit, where only a tiny viscous force is present, even a constant will produce the correct qualitative trend. It might be interesting to mention an alternative interpretation of the viscosity term in self-gravitating systems. According to Domínguez (2000), a term which resembles the one for kinematical viscosity is the unavoidable consequence of the coarse-graining process inherent in a hydrodynamical description.
The equation above is known as the forced Burgers equation of nonlinear diffusion (e.g. Barabási & Stanley 1995; Frisch & Bec 2000). Its possible application to the cosmological structure formation problem was suggested long ago by Zel’dovich et al. (1985, 1987) and recently studied in greater detail by Jones (1996, 1999). There are a number of differences between our approach and the one by Jones, which lead to a different dynamics of the IGM. First, we used a different time variable (the scale-factor instead of the cosmic time ) to define the baryon velocity field and its acceleration, thus leading to the form of eq. (28), where the force on the RHS was replaced by its linearized expression. Second, we do include some effect of the baryonic pressure in our external random potential , which is instead absent in Jones’ treatment, where the role of is played by the linear gravitational potential generated by the dark matter component. Thus, our scheme, unlike the one by Jones, is able to exactly reproduce the evolution of the baryons at the linear level. Finally, our random potential has a non-trivial time dependence, unlike the one assumed by Jones.
The forced Burgers equation can be transformed into a Bernoulli-like equation for the velocity potential, namely
The problem is fully specified once the initial conditions for the velocity potential and the statistics of the noise are given. Our initial velocity potential is . The statistics of the noise follows directly from that of the linear gravitational potential, which we assume to be a Gaussian random field. Then, also is a Gaussian process with zero mean and auto-correlation function
where the average is over the ensemble, is the linear power-spectrum of DM density fluctuations, extrapolated to the present time, and is the spherical Bessel function of order zero. According to this formula, is a stochastic process with the following properties:
i) It is a colored (i.e. non-white) Gaussian random process both in space and time.
ii) It is stationary (i.e. homogeneous and isotropic) in space but not in time (as its auto-correlation function does depend on only, but not on : this is typical of the cosmological case.
iii) On small scales, oscillates rapidly in time for all , with a period which generally depends on the wavenumber.
iv) The non-separability of the space and time dependence of its auto-correlation function implies that behaves as a stochastic process both in space and time. More precisely, the time evolution of the noise in each given spatial point cannot be predicted on the basis of local initial conditions only, as it depends on the realization of the Gaussian field over the whole Lagrangian space, through the time evolution of the IGM filter.
Let us finally stress an important peculiarity of our cosmological application of the forced Burgers equation: the same stochastic process which underlies the external random potential also provides the initial condition for the velocity potential, .
A plot of the function vs. redshift is given in Figure 4 for various values of , to display the time-dependence of the random force and, in particular, its oscillatory character on small scales. In Figure 5 the dimensionless power-spectra of two terms contributing to the evolution of the velocity potential are shown: that of the random potential , , and that of the linear velocity potential , multiplied by a factor to give an estimate of the ‘deterministic’ term in eq. (48), namely . Note that the two contributions become of the same order at at the relevant redshifts; on smaller scales the random potential dominates the IGM dynamics.
Figure 4: The evolution of the ratio , for various models of the IGM mean temperature evolution.
Figure 5: The dimensionless power-spectra of the stochastic and linear deterministic terms in the evolution of the IGM velocity potential are shown for at (left panel), (middle panel) and (right panel). The normalization of the linear density power-spectrum, , is chosen arbitrarily.
The above evolution equation for , in cases when the random potential is white-noise in time, has been extensively studied in condensed matter physics, where it became popular as the Kardar-Parisi-Zhang (KPZ) equation (Kardar, Parisi & Zhang 1986). The KPZ equation describes the growth of an interface under random particle deposition. Here the viscosity coefficient plays the role of temperature and the velocity potential is interpreted as the height above an initially flat surface, which is driven by the random noise and gradually becomes rough.
5.1 Solving the random heat equation
Equation (48) can be easily related to a linear, partial-differential equation, by means of the nonlinear Hopf-Cole transformation (e.g. Burgers 1974), which leads to the linear diffusion equation with a multiplicative potential, also called ‘random heat’ equation
Starting from the solution of the latter equation for the Hopf-Cole transformed velocity potential [which has been dubbed ‘expotential’ by Weinberg & Gunn (1990a)] one can easily find the velocity by transforming everything back
One first obtains an expression for the ‘transition kernel’ , representing the particular solution obtained from a Dirac delta function, , at the initial time . This has a formal solution given in terms of the (Euclidean) Feynman-Kac path-integral 444The solution of eq. (50) in the absence of a potential, i.e. for the ‘free’ adhesion approximation, is reviewed in Appendix B. formula (e.g. Feynman & Hibbs 1965), namely
where the action is given by
with the Lagrangian of a particle moving in the potential .
In our diffusion equation the transition kernel is immediately understood as the conditional probability of finding a particle in , at time , given that it was initially in the Lagrangian position . To better understand the path-integral solution it is however convenient to think it in connection with a quantum mechanical problem. It is in fact immediate to realize that an inverse ‘Wick rotation’ of the time variable , together with the formal replacement transforms eq. (50) into the Schrödinger equation for a particle, of unit mass, subjected to the potential . According to the path-integral representation of quantum mechanics, its solution is obtained by ‘integrating’ over all possible paths connecting these two end-points, each one weighed by the action , calculated along this path.
Once the kernel is known, the solution of the random heat equation with appropriate initial conditions is obtained through the application of the Chapman-Kolmogorov equation (e.g. van Kampen 1992), by integrating the product of the initial function with the transition kernel over the whole Lagrangian space, namely . In our case . Thus,
In the limit of vanishing viscosity (corresponding to the classical limit, , in our quantum mechanical analog) the dominant contribution to the path-integral comes from the ‘classical’ path, i.e. that which satisfies the Euler-Lagrange equations of motion,
which in our case leads to Newton’s second law, . Thus, the particle trajectories along the classical path read
with general initial velocity . Note that there are still infinitely many classical trajectories joining the two end-points and , corresponding to the freedom to choose the initial velocity .
We can now expand around the classical trajectories in the standard manner of quadratic approximations (e.g. Feynman & Hibbs 1965), that is , subject to the constraint that the fluctuations around the classical trajectory vanish at the end points, namely . We then have
with the symbol standing for functional differentiation. The classical action, , is a function of the end-points and of the time interval , which has to be calculated by replacing the solution of the Euler-Lagrange equations into its expression.
The action is an extremum for the classical trajectory, thus |
58cd496d0a88d6a3 | I heard in wave optics and electromagnetism that Hamilton could have discovered the Schrödinger equation, or that he was the first man who used the expression
$$ \Psi(x)= \exp(i S(x)/\hbar)\,. $$
I also heard that Hamilton got the idea from the eikonal equation
$$ (\nabla S)^{2}=n(x)\,, $$
but that he couldn't complete it.
He was trying to show that light particles could also be described as a wave, but had no way to prove it. Is this what happened?
• 3
$\begingroup$ Hamilton did not discover quantum mechanics. But he discovered some mathematics which was later used in quantum mechanics. $\endgroup$ – Alexandre Eremenko Mar 4 '16 at 21:05
It is more accurate to say that Hamilton anticipated some of the ideas of mathematics and heuristics of quantum mechanics, that would later inspire Schrödinger to produce his formulation of wave mechanics. The reason he was able to anticipate those ideas is that the quantum wave-particle duality had a classical predecessor, the optico-mechanical analogy. Indeed, it was optics, not mechanics, that originally inspired the Hamiltonian formalism. But no, Hamilton was not about to discover the Schrödinger equation. There is a qualitative difference between the first order Hamilton-Jacobi equation, and the second order Schrödinger equation, the connection is only recovered in the quasi-classical limit. And it is only for the latter that the stationary phase approximation for integrals of $\exp(i S(x)/\hbar)$ is used. The approximation itself was only introduced by Kelvin in 1887, Hamilton's approach was more geometric.
In a simple form the analogy was discovered by Huygens around 1670 (published in Traitė de la Lumiere, 1678), who noticed that propagation of waves could be dually described in terms of wavefronts and rays ("characteristics") perpendicular to them. The latter can be considered as trajectories of particles, and massive amounts of particles spreading along characteristics can create the appearence of a continuous wave. But it also works vise versa, and Huygens suggested that light may well be a wave, with geometric optics of rays being only the first approximation. In particular, Huygens showed how Fermat's least time principle follows from the analogy, Johann Bernoulli used it to solve the famous brachistochrone problem in 1696, and in 1818 Fresnel showed how not only geometric optics but also diffraction and interference can be explained by wave optics, which led to its wide acceptance in the 19th century.
But it was Hamilton who explored the analogy in its full generality. As Guillemin writes in Geometric Asymptotics:
"In 1828 Hamilton published his fundamental paper on geometrical optics, introducing his "characteristics", as a key tool in the study of optical instruments. It wasn't until substantially later that Hamilton realized that his method applied equally well to the study of mechanics. Hamilton's method was developed by Jacobi and has been a cornerstone of theoretical mechanics ever since... It is interesting to note that although Hamilton was aware of the work of Fresnel, he chose to ignore it completely in his fundamental papers on geometrical optics."
The eikonal equation is only the simplest case of the Hamilton-Jacobi equation, when the Hamiltonian of the mechanical system only has the standard kinetic energy term. In general, the familiar equations of Hamilton dynamics are solved by the characteristics of the corresponding Hamilton-Jacobi equation. Although the method of stationary phase is often used to derive geometric asymptotics today, it was not available to Hamilton.
When the conflict between the wave optics and the newly introduced light quanta Schrödinger, a big fan of Hamiltonian dynamics, was reminded of the optico-mechanical analogy, and started thinking about classical mechanics being the limit of a new mechanics along the lines of geometric optics being a limit of wave optics. This led him to his celebrated equation. In the quasi-classical limit the surfaces of equal phase are the "wave fronts", and the trajectories of the particles are the characteristics. A systematic development of this approach is known as the WKB (Wentzel–Kramers–Brillouin) method.
| improve this answer | |
What Hamilton discovered is a mathematical "Hamiltonian formalism". It was applied to those parts of physics which were known at the time of Hamilton: classical mechanics and optics. There was no slightest reasons at the time of Hamilton to suspect that matter on small scale does not obey the laws of classical mechanics.
That this is so, is a late 19s century discovery. However it turned out that Hamiltonian formalism is so general that it applies to quantum mechanics as well. Thus many equations written by Hamilton (and not only by Hamilton, but by other late 18s and early 19s century researchers in mechanics, like Lagrange) actually apply, if they are correctly interpreted.
Similarly, Calculus, a mathematical tool invented in 17s century serves not only mechanics, but all physics discovered later. But nobody claims on this ground that Newton and Leibniz discovered all physics which was discovered later.
EDIT. Of course this is a miracle that the same mathematical tool applies to a very wide class of phenomena in the universe, including those which were not known when the tool was invented/discovered, but this is the way our world is created.
| improve this answer | |
Your Answer
|
42fd69795a2f2872 | Schedule for: 17w5010 - Mathematical and Numerical Methods for Time-Dependent Quantum Mechanics - from Dynamics to Quantum Information
Beginning on Sunday, August 13 and ending Friday August 18, 2017
All times in Oaxaca, Mexico time, CDT (UTC-5).
Sunday, August 13
14:00 - 23:59 Check-in begins (Front desk at your assigned hotel)
19:30 - 22:00 Dinner (Restaurant Hotel Hacienda Los Laureles)
20:30 - 21:30 Informal gathering (Hotel Hacienda Los Laureles)
Monday, August 14
07:30 - 08:45 Breakfast (Restaurant at your assigned hotel)
08:45 - 09:00 Introduction and Welcome (Conference Room San Felipe)
09:00 - 10:00 Christiane Koch: Optimal control of open quantum systems: Theoretical foundations and applications to superconducting quantum devices
Quantum control is an important prerequisite for quantum devices [1]. A major obstacle is the fact that a quantum system can never completely be isolated from its environment. The interaction of a quantum system with its environment causes decoherence. Optimal control theory is a tool that can be used to identify control strategies in the presence of decoherence. I will show how to adapt optimal control theory to quantum information tasks for open quantum systems [2].
A key application of quantum control is to identify performance bounds, for tasks such as state preparation or quantum gate implementation, within a given architecture. One such bound is the quantum speed limit, which determines the shortest possible duration to carry out the task at hand. For open quantum systems, interaction with the environment may lead to a speed-up of the desired evolution. Here, I will show how initial correlations between system and environment may not only be exploited to speed up qubit reset but also to increase state preparation fidelities. Geometric control techniques provide an intuitive understanding of the underlying dynamics [3].
Control tasks such as state preparation or gate implementation are typically optimized for known, fixed parameters of the system. Showcasing the full capabilities of quantum optimal control, I will discuss how recent advances in quantum control techniques allow for going even further. Using a fully numerical quantum optimal control approach, it is possible to map out the entire parameter landscape for superconducting transmon qubits. This allows to determine the global quantum speed limit for a universal set of gates with gate errors limited solely by the qubit lifetimes. It thus provides the optimal working points for a given architecture [4].
While the interaction of qubits with their environment is typically regarded as detrimental, this does not need to be the case. I will show that the back-flow of amplitude and phase encountered in non-Markovian dynamics can be exploited to carry out quantum control tasks that could not be realized if the system was isolated [5]. The control is facilitated by a few strongly coupled, sufficiently isolated environmental modes. These can be found in a variety of solid-state devices including superconducting circuits.
[1] S. Glaser et al.: Training Schrödinger's cat: quantum optimal control, Eur. Phys. J. D 69, 279 (2015)
[2] C. P. Koch: Controlling open quantum systems: Tools, achievements, limitations, J. Phys. Cond. Mat. 28, 213001 (2016)
[3] D. Basilewitsch et al.: Beating the limits with initial correlations, arXiv:1703.04483
[4] M. H. Goerz et al.: Charting the circuit-QED Design Landscape Using Optimal Control Theory, arXiv:1606.08825
[5] D. M. Reich, N. Katz & C. P. Koch: Exploiting Non-Markovianity for Quantum Control, Sci. Rep. 5, 12430 (2015)
(Conference Room San Felipe)
10:00 - 10:30 Roberto León-Montiel: Simulation of Born-Markov Open Quantum Systems in Electronic and Photonic Systems
Controllable devices provide novel ways for the simulation of complex quantum open systems. In this talk, we will present different experimental platforms, developed in our group, where the dynamics of Born-Markov open quantum systems can be successfully simulated. In particular, we will discuss the observation of the so-called environment-assisted quantum transport in electrical oscillator networks, the survival of quantum coherence between indistinguishable particles propagating in quantum networks affected by noise, and the implementation of the first noise-enabled optical ratchet system.
(Conference Room San Felipe)
10:30 - 11:00 Coffee Break (Conference Room San Felipe)
11:00 - 12:00 Gabriel Turinici: Identification of quantum Hamiltonians in presence of non-perturbative noisy data
We focus on quantum systems subject to external interactions (laser, magnetic fields) taken as controls, which are contaminated with non perturbative noise. The measured observables come thus in the form of probability laws; we ask the following question: is it possible, from the knowledge of these probability laws, to recover the free and interaction (dipole) Hamiltonians ? We see that the theoretical answer is positive (provided some assumptions on the controllability of the quantum system hold); then we explore numerical approaches which exploit particular metrics on the space of probability laws.
(Conference Room San Felipe)
12:20 - 12:30 Group Photo (Hotel Hacienda Los Laureles)
12:30 - 14:00 Lunch (Restaurant Hotel Hacienda Los Laureles)
14:00 - 15:00 Barry Sanders
(Conference Room San Felipe)
15:00 - 15:30 Francois Fillion-Gourdeau: Numerical scheme for the solution of the Dirac equation on classical and quantum computers
A numerical scheme that solves the time-dependent Dirac equation is presented in which the time evolution is performed by an operator-splitting decomposition technique combined with the method ofcharacteristics. On a classical computer, this numerical method has some nice features: it is very versatile and most notably, it can be parallellized efficiently. This makes for an interesting numerical tool for the simulation of quantum relativistic dynamical phenomena such as the electron dynamics in very high intensity lasers. Moreover, this numerical scheme can be implemented on a digital quantum computer due to its simple structure: the operator splitting is a sequence of streaming operators followed by rotations in spinor space. This structure is actually reminiscent of quantum walks, which can be implemented efficiently on quantum computers. We determine the resource requirements of the resulting quantum algorithm and show that under some conditions, it has an exponential speedup over the classical algorithm. Finally, an explicit decomposition of this algorithm into elementary gates from a universal set is carried out using the software Quipper. It is shown that a proof-of-principle calculation may be possible with actual quantum technologies.
(Conference Room San Felipe)
15:30 - 16:00 Coffee Break (Conference Room San Felipe)
16:00 - 17:00 Tucker Carrington: Pruned multi-configuration time-dependent Hartree methods
I shall present two pruned, nondirect product multi-configuration time dependent Hartree (MCTDH) methods for solving the Schr¨odinger equation. Both use a basis of products of natural orbitals. Standard MCTDH uses optimized 1D basis functions, called single particle functions, but the size of the basis scales exponentially with D, the number of coordinates.By replacing t → −iβ , β ∈ R>0 , we use the pruned methods to determine solutions of the time-independent Schroedinger equation. For a 12D Hamiltonian, we compare the pruned approach to standard MCTDH calculations for basis sizessmall enough that the latter are possible and demonstrate that pruning the basis reduces the CPU cost of computing vibrational energy levels of acetonitrile by more than two orders of magnitude. One of the pruned MCTDH methods uses an algebraic pruning constraint. The other uses a flexible basis that expands as the calculation proceeds. Results obtained with the expanded basis are compared to those obtained with the established multi-layer MCTDH (ML-MCTDH) scheme. Although ML-MCTDH is somewhat more efficient when low or intermediate accuracy is desired, pruned MCTDH is more efficient when high accuracy is required.
(Conference Room San Felipe)
17:00 - 17:30 Carlos Argáez García: Numerical improvements in methods to find first order saddle points on potential energy surfaces
The minimum mode following method for finding first order saddle points on a potential energy surface is used, for example, in simulations of long time scale evolution of materials and surfaces of solids. Such simulations are increasingly being carried out in combination with computationally demanding electronic structure calculations of atomic interactions. Therefore, it becomes essential to reduce, as much as possible, the number of function evaluations needed to find the relevant saddle points. Several improvements to the method are presented here and tested on a benchmark system involving rearrangements of a heptamer island on a closed packed crystal surface. Instead of using a uniform or Gaussian random initial displacement of the atoms, as has typically been done previously, the starting points are arranged evenly on the surface of a hypersphere and its radius is adjusted during the sampling of the saddle points. This increases the diversity of saddle points found and reduces the chances of converging again to previously located saddle points. The minimum mode is estimated using the Davidson method, and it is shown that significant savings in the number of function evaluations can be obtained by assuming the minimum mode is unchanged until the atomic displacement exceeds a threshold value.
(Conference Room San Felipe)
18:00 - 20:00 Dinner (Restaurant Hotel Hacienda Los Laureles)
Tuesday, August 15
07:30 - 09:00 Breakfast (Restaurant at your assigned hotel)
09:00 - 10:00 Eric Cances: Mathematical models for electron transport in periodic and aperiodic materials: towards first-principle calculations
Electron transport in materials and mesoscopic is a very complex topic, not yet fully understood from a physical point of view. Some of the key mechanisms at the origin of electron transport have been identified though, and a wide variety of mathematical models have been proposed to account for these phenomena, among which the classical Drude model (1900), the Bloch model (1928), the BCS model (1957), the Anderson model (1958), the Hubbard model (1963), the Landauer-Büttiker formalism (1986), or the Haldane model (1988). Electron transport is described by a variety of models ranging from many-body quantum models, to semiclassical drift-diffusion models. Coherent transport inmesoscopic devices is usually using the Landauer-Büttiker formalism. For homogeneous periodic materials with or without defects, most of the available models are based on the quasiparticle picture of quasi Bloch electrons and holes scattered by phonons, impurities, and effective two-body interactions. Methods based on non-commutative geometry have also been introduced by Bellissard and collaborators to handle homogeneous aperiodic systems. In the last two decades, it has become possible to compute (more or less accurately) the various parameters of these models from first-principle calculations, using Density Functional Theory (DFT) and Green’s function methods (GW, Bethe-Salpeter equation). In this talk, I will review some of the recent progress and open questions in the mathematical understanding and numerical simulation of these models.
(Conference Room San Felipe)
10:00 - 10:30 Emilio Pisanty: Slalom in complex time: semiclassical trajectories in strong-field ionization and their analytical continuations
A large part of strong-field physics relies on trajectory-based semiclassical methods for the description of ionization and the subsequent dynamics, both for intuitive understanding and as quantitative models, including the workhorse Strong-Field Approximation and its various extensions. In this work I examine the underpinnings of this trajectory-based language in the saddle-point analysis of temporal integrals deformed over complex contours, and how this formalism can be extended to include interactions with the ion. I show that a first-principles approach requires the use of complex-valued positions as well as times, and that the evaluation of the ionic Coulomb potential at these complex-valued positions imposes new constraints on the allowed temporal integration contours. I show how the navigation of these constraints is infused with physical content and a rich geometry, and how the correct traversal of the resulting landscape has clear consequences on the photoelectron spectrum.
1. E. Pisanty and M. Ivanov. Phys. Rev. A 93, 043408 (2016).
2. E. Pisanty and M. Ivanov. J. Phys. B: At. Mol. Opt. Phys. 49, 105601 (2016).
(Conference Room San Felipe)
11:00 - 12:00 Thomas Brabec: Strong laser solid state physics
Recent HHG and ionization experiments in solids have given birth to the field of attosecond condensed matter physics. Potential applications range from solid state coherent xuv radiation sources, to resolving ultrafast processes in the condensed matter phase, to PHz (petahertz) opto-electronic elements. Theoretical analysis of these processes has so far been mainly confined to the single-active electron (SAE) limit. In the first part of the talk theoretical progress in understanding HHG and ionization in the SAE limit will be reviewed. Both, bulk and nano-confined systems will be explored. While SAE analysis can reasonably describe many features of strong field processes in solids on a qualitative level, there is no doubt that many-body effects play an important role quantitatively. Beyond that, we expect them to add additional signatures to the one-body results which will make them identifiable experimentally. In the second part of the talk many-body features of strong field processes in solids will be explored. We use a multi-configuration time-dependent Hartree-Fock (MCTDHF) method developed for atomic and molecular attosecond science. MCTDHF is used to model 1D model quantum wires in strong fields, which consist of a string of atoms with one electron per lattice site. Chains of up to more than 20 atoms are investigated. Our results exhibit clear multi-electron signatures in HHG spectra.
(Conference Room San Felipe)
14:00 - 15:00 Tsuyoshi Kato: An effective potential theory for time-dependent wave function
After the formulation of multi-configuration time-dependent Hartree-Fock (MCTDHF) method to treat electronic dynamics in atoms and molecules induced by the interaction with intense laser pulses from first principles [1], the theoretical efforts exerted on the developments of the method has been changed their aspects from the basic formulations and the proof-of-principle type calculations to practical calculations in order to elucidate the many electron dynamics by comparisons with experimental results [2].
Recently, efforts have been made to improve the numerical performance of the MCTDHF method aiming to reduce the size of the configuration space, i.e., Slater determinantal expansion length, by restricting the orbital excitation schemes [3,4].
A different approximation of factorized configuration interaction coefficients [5] as well as the multi-layer formulation of MCTDHF [6] have also been introduced recently.
In the present study, we propose a new formulation for the time propagation of a time-dependent multi-configuration wave function in which the spin-orbitals follow a single-particle time-dependent Schrödinger equation (TDSE) specified by a multiplicative time-dependent local effective potential $v_{\rm eff}(\mathbf{r},t)$.
We consider an $N$-electron time-dependent wave function $ \Psi(x_1,x_2,\cdots,x_N,t) $ perturbed by a time-dependent external field.
The wave function is assumed to be represented by $$ \Psi(x_1,x_2,\cdots,x_N,t) = \sum_{K=1}^{\mathcal{L}} C_K(t) \Phi_K(x_1,x_2,\cdots,x_N,t), $$ where ${\{C}_{K}(t)\}$ represent time-dependent configuration interaction coefficients and $\Phi_{K}(t)$ time-dependent Slater determinants.
The time-dependence of each Slater determinant is due to the time dependence of the constituent spin-orbitals.
The total Hamiltonian of the system is represented by $$ \hat{H}(t) = \hat{T} + \hat{V}_{\rm ext}(t) + \hat{V}_{\rm ee} $$, where $\hat{T}$, $\hat{V}_{\rm ext}(t) = \sum_{j=1}^N v_{\rm ext}(\mathbf{r}_j,t)$, and $\hat{V}_{\rm ee}$ represent the kinetic energy operator, the sum of nuclear attraction potential and the time-dependent external perturbation, and the electron-electron repulsion potential, respectively.
The spin-orbitals are assumed to obey a single-particle TDSE expressed by $$ \left[ i \hbar \frac{\partial }{\partial t} -\left( - \frac{\hbar^2}{2m_{\rm e}} \frac{\partial^2 }{\partial {\mathbf{r}}^2} + v_{\rm eff}(\mathbf{r},t) \right) \right] \phi_k(x,t) = 0. $$ where $x=(\mathbf{r},\sigma)$ denotes the spatial and spin-coordinates of an electron, and $v_{\rm eff}(\mathbf{r},t)$ is the effective potential to be calculated.
We define an effective Hamiltonian for the relevant system as $$ \hat{H}_{\rm eff}(t) = \hat{T} + \sum_{j=1}^N v_{\rm eff}(\mathbf{r}_j,t) = \hat{T} + \hat{V}_{\rm eff}. $$ The effective potential is formulated by using McLachlan's minimization principle in which the difference of the time-evolution of the wave function $\Psi(x_1,x_2,\cdots,x_N,t)$ is minimized between the TDSEs specified by $\hat{H}(t)$ and $\hat{H}_{\rm eff}(t)$.
We report the detailed theoretical analysis of the properties of the effective potential associated with an exact wave function.
Furthermore, as an elementary application of the present formalism, we propose a direct method to calculate the so-called Brueckner orbitals [7] as a special solution of a set of spin-orbitals calculated as eigenfunctions for a single-particle Schrödinger equation specified by a time-independent effective potential $v_{\rm eff}(\mathbf{r})$ that is associated with an exact ground-state wave function [8].
Also, the relationship between the present effective potential and the Slater's effective potential will be clarified [9].
1. For example, T. Kato and H. Kono, Chem. Phys. Lett. 392 (2004) 533-540.
2. K.L. Ishikawa and T. Sato, IEEE J. Sel. Topics Quantum Electron. 21 (2015) 8700916-1-16.
3. H. Miyagi and L.B. Madsen, Phys. Rev. A 87 (2013) 062511-1-12.
4. T. Sato and K. L. Ishikawa, Phys. Rev. A 91 (2015) 023417-1-15.
5. E. Lötstedt, T. Kato, and Y. Yamanouchi, J. Chem. Phys. 144 (2016) 154116-1-13.
6. H. Wang and M. Thoss, J. Chem. Phys. 131 (2009) 024114-1-14.
7. R.K. Nesbet, Phys. Rev. 109 (1958) 1632-1638.
8. P.O. Löwdin, J. Math. Phys. 3 (1962) 1171-1184.
9DJ.C. Slater, Phys. Rev. 91 (1953) 528-530.
(Conference Room San Felipe)
15:00 - 15:30 Szczepan Chelkowski: Beyond-dipole approximation effects in photoionization: importance of the photon momentum
In most of the past studies of processes involving interaction of lasers with atoms and molecules the tiny photon momentum has not been taken into account nor the issue of momentum sharing between a photoelectron and an ion has not been addressed despite the fact than when intense lasers are used a huge amount of infrared photons are absorbed. This situation has been related to the fact that in most theoretical investigations the dipole approximation has been used for description of the photoionization processes. In this talk I emphasize the importance of using the non-dipole approaches in description of the interaction of intense lasers with atoms and molecules. I will review some surprising results obtained by us using numerical solutions of the time-dependent Schroedinger equation in [1-3] and present new results related to the photon-momentum effect using counter-propagating pulses and the specific non-dipole effects in diatomic molecules.
[1] S. Chelkowski, A.D. Bandrauk, and P.B. Corkum, Phys.Rev.Let. 113, 263005 (2014).
[2] S. Chelkowski, A.D. Bandrauk, and P.B. Corkum, Phys.Rev. A 92, 051401 (R) (2015).
[3] S. Chelkowski, A.D. Bandrauk, and P.B. Corkum, Phys.Rev. A 95, 053402 (2017).
(Conference Room San Felipe)
16:00 - 17:00 Anthony Starace: Applications of Elliptically-Polarized, Few-Cycle Attosecond Pulses
Use of elliptically-polarized light opens the possibility of investigating effects that are not accessible with linearly-polarized pulses. This talk presents new physical effects that are predicted for ionization of the helium atom by few-cycle, elliptically-polarized attosecond pulses. For double ionization of He by an intense elliptically-polarized attosecond pulse, we predict a nonlinear dichroic effect (i.e., the difference of the two-electron angular distributions in the polarization plane for opposite helicities of the ionizing pulse) that is sensitive to the carrier-envelope phase, ellipticity, peak intensity I, and temporal duration of the pulse [1]. For single [2,3] and double ionization [4] of He by two oppositely circularly-polarized, time-delayed attosecond pulses, we predict that the photoelectron momentum distributions in the polarization plane have helical vortex structures that are exquisitely sensitive to the time-delay between the pulses, their relative phase, and their handedness [2-4]. These effects manifest the ability to control the angular distributions of the ionized electrons by means of the attosecond pulse parameters. Our predictions are obtained numerically by solving the two-electron time-dependent Schrödinger equation for the six-dimensional case of elliptically-polarized attosecond pulses. They are interpreted analytically by means of perturbation theory analyses of the two ionization processes.
*This work is supported in part by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Award No. DE-FG03-96ER14646.
[1] J.M. Ngoko Djiokap, N.L. Manakov, A.V. Meremianin, S.X. Hu, L.B. Madsen, and A.F. Starace, “Nonlinear dichroism in back-to-back double ionization of He by an intense elliptically-polarized few-cycle XUV pulse,” Phys. Rev. Lett. 113, 223002 (2014).
[2] J.M. Ngoko Djiokap, S.X. Hu, L.B. Madsen, N.L. Manakov, A.V. Meremianin, and A.F. Starace, “Electron Vortices in Photoionization by Circularly Polarized Attosecond Pulses,” Phys. Rev. Lett. 115, 113004 (2015).
[3] J.M. Ngoko Djiokap, A.V. Meremianin, N.L. Manakov, S.X. Hu, L.B. Madsen, and A.F. Starace, “Multistart Spiral Electron Vortices in Ionization by Circularly Polarized UV Pulses,” Phys. Rev. A 94, 013408 (2016).
[4] J.M. Ngoko Djiokap, A.V. Meremianin, N.L. Manakov, S.X. Hu, L.B. Madsen, and A.F. Starace, “Kinematical Vortices in Double Photoionization of Helium by Attosecond Pulses,” Phys. Rev. A (accepted 19 June 2017, in press).
(Conference Room San Felipe)
17:00 - 17:30 Simon Neville: Studying Photochemical Processes Using the Ab Initio Multiple Spawning Method
The nuclear dynamics of molecules following photoexcitation are fundamentally quantum dynamical processes owing to the breakdown of the Born-Oppenheimer approximation in regions close to the intersection of potential energy surfaces. To describe such processes, the solution of the time-dependent Schrödinger equation is necessitated. If large amplitude nuclear motion is involved, then this is a particularly challenging task for conventional gridbased quantum dynamics methods, owing to the need to construct model Hamiltonians that accurately describe multiple potential energy surfaces and the couplings between them over a large subvolume of nuclear configuration space. A powerful alternative is to expand the nuclear wavefunction in terms of localised time-dependent parameterised basis functions, and to exploit this locality to calculate Hamiltonian matrix elements using information at a small number of nuclear geometries. Such methods not only promise to break the curse of exponential scaling suffered by conventional grid-based methods, but also allow for quantum dynamics calculations to be performed ‘on-the-fly’ using information from ab initio electronic structure calculations calculated as and when it is needed. In this talk, I will discuss one such method: the ab initio multiple spawning (AIMS) method. In the AIMS method, the solution of the time-dependent Schrödinger equation is achieved via the expansion of the nuclear wavefunction in an adaptive set of Gaussian basis functions, which is increased in size when needed in order to efficiently describe the transfer of population between electronic states. In the first part of the talk, the theoretical foundations and details of the AIMS method will be discussed. In the second part, representative examples of the study of excited state molecular dynamics will be given, illustrating the power and success of the AIMS method in studying photochemical processes.
(Conference Room San Felipe)
Wednesday, August 16
09:00 - 10:00 Sophie Schirmer: Control of Quantum Spin Devices, feedback control laws and hidden feedback
Networks of interacting spin-1/2 particles form the basis for a wide range of quantum technologies including quantum communication, simulation and computation devices. Optimal control provides methods to steer their dynamics to implement specific quantum operations. It is usually implemented to find optimal time-dependent control fields to implement quantum gates or transformations of quantum states or observables in the context of open-loop quantum control. We recently proposed an alternative approach of static controls, based on shaping the energy landscape of quantum systems. For coupled spin systems this type of control could be realized in terms of spatially distributed gates that introduce energy level shifts using quasi-static local electric or magnetic fields. Although there are insufficient control degrees of freedom for the system to be completely controllable, many practically interesting operations can be implemented using these controls, including efficient transfer of excitations in spin networks. Furthermore, the resulting controllers combine high fidelity and strong robustness properties under device uncertainties, surpassing traditional limits in classical control. In particular, we observe positive correlations between the logarithmic sensitivity and the control error in many cases, i.e., the highest fidelity controllers are also the most robust. Structured singular value analysis shows the same trend for large structured variation using $\mu$-analysis tools.
One way to understand the surprising robustness of the controllers is in terms of feedback control laws. The energy biases create direct feedback loops. Similar to feedback loops in electronic circuits such as operational amplifiers, this feedback is highly effective and does not require measurements.
I will discuss results on energy landscape control for quantum spin devices with a focus on robustness at high fidelities of the operations and the interpretation of the controllers in terms of feedback control laws.
[1] Emergence of Classicality under decoherence in robust quantum transport, S. Schirmer, E. Jonckheere, S. O’Neil, and F. Langbein, in preparation.
[2] Design of Feedback Control Laws for Information Transfer in Spintronics Networks, S Schirmer, E Jonckheere, F Langbein, arXiv:1607.05294, 2016.
[3] Time optimal information transfer in spintronics networks, FC Langbein, S Schirmer, E Jonckheere, 2015 IEEE 54th Annual Conference on Decision and Control (CDC), 6454-58, 5, 2015.
[4] Structured singular value analysis for spintronics network information transfer control, E Jonckheere, S Schirmer, F Langbein, IEEE Transactions on Automatic Control, DOI: 10.109/TAC.2017.2714623, arXiv:1706.03247, in press, 2017.
[5] Jonckheere-Terpstra test for nonclassical error versus log-sensitivity relationship of quantum spin network controllers, E Jonckheere, S Schirmer, F Langbein, arXiv:1612.02784, 2016.
[6] Information transfer fidelity in spin networks and ring-based quantum routers, E Jonckheere, F Langbein, S Schirmer, Quantum Information Processing 14 (12), 4751-4785.
[7] Multi-fractal Geometry of Finite Networks of Spins, P Bogdan, E Jonckheere, S Schirmer, arXiv:1608.08192, 2016.
(Conference Room San Felipe)
10:00 - 10:30 Hector Moya Cessa: Ion-Laser Interactions and the Rabi Model
It will be shown that the Rabi model and the ion laser interaction Hamiltonians may be related via a simmilarity transformation which allows much faster regimes than the ones reached with the rotating wave approximation. This is because the speed of the interaction is dictated the Rabi (intensity of the laser) and our approach does not impose a condition on it.
It will also be shown that the unitary transformation may be done in the time dependent case, and the Rabi model Hamiltonian parameters will depend also on time.
(Conference Room San Felipe)
11:00 - 12:00 Siu Chin: Higher order forward time-step algorithms for solving diverse evolution equations
The well-know pseudo-spectral second-order splitting method has been the work-horse algorithm for solving the time-dependent Schrodinger equation for decades. However, it has been only ~30 years that one learns how to generalize this algorithm to fourth and higher orders. During that period, one also learned about the crucial distinction between solving time-reversible and time-irreversible quantum evolution equations. In this work, I show that there is only one class of fourth-order, forward time step algorithm that can solve both time-reversible and time-irreversible equations with equal efficiency. Other higher order algorithms based on multi-product expansion and complex coefficient will also be mentioned.
(Conference Room San Felipe)
12:00 - 13:30 Lunch (Restaurant Hotel Hacienda Los Laureles)
13:30 - 17:30 Free Afternoon (Oaxaca)
Thursday, August 17
09:00 - 10:00 Neepa Maitra: Capturing Electron-Electron Electron-Ion Correlations in Strong Fields
The electronic system is driven far from its ground state in many applications today: attosecond control and manipulation of electron dynamics and the consequent ion dynamics, photovoltaic design, photoinduced processes in general. Time-dependent density functional theory is a good candidate by which to computationally study such problems. Although it has had much success in the linear response regime for calculations of excitation spectra and response, its reliability in the fully non-perturbative regime is less clear even though it is increasingly used. In the first part of the talk, I will show some of our recent work exploring exact features of the time-dependent exchange-correlation potential that are necessary to yield accurate dynamics and new approaches to develop functionals going beyond the adiabatic approximation. In the second part of the talk, I broaden the focus to the description of coupled electron-ion motion. When the coupling to quantum nuclear dynamics is accounted for, we find additional terms in the potential acting on the electronic subsystem, that fully account for electron-nuclear correlation, and that can yield significant differences to the traditional potentials used when computing coupled electron-ion dynamics.
(Conference Room San Felipe)
10:00 - 10:30 Axel Schild: An Exact Single-Electron Picture of Many-Electron Processes and its Application to the Dynamics of Molecules in Strong Laser Fields
To solve the equation of motion for a quantum-mechanical wavefunction $\psi$ the Schrödinger equation, for a many-particle system is a major problem in many branches of physics. Based on the idea that the joint probability density $|\psi|^2$ can be written as the product of a marginal probability density (that depends only on some of the particle variables) and a conditional probability density, it can be shown that there exists a marginal wavefunction which also obeys a Schrödinger equation, but with an effective potential that encodes the interaction with all parti- cles. In this talk, I present an application of this idea to a many-electron system: By taking only the variables of one electron as the marginal coordinates, an exact single-electron picture that describes the correlated electron dynamics of many electrons is obtained. All many-electron interactions are then encoded in the structure and time-dependence of an effective single-electron potential. This approach is applied to the description of strong field phenomena, because they are often interpreted in a single-electron picture, while at the same time the understanding and measurement of many-electronic interactions is a main topic in this field. First results for 2- and 3-electron model systems in strong laser fields are presented and used to illustrate how an exact single electron picture of ionization or high-harmonic generation looks like. Additionally, I show first steps towards a feasible method for the calculation of the many-electron dynamics in complex molecules.
[1] Axel Schild, E.K.U. Gross, Phys. Rev. Lett. 118, 163202 (2017).
(Conference Room San Felipe)
11:00 - 12:00 Kenneth Lopata: Attosecond Charge Migration with TDDFT: Accurate Dynamics from a Well Defined Initial State
We investigate the ability of time-dependent density functional theory (TDDFT) to capture attosecond valence electron dynamics resulting from sudden X-ray ionization of a core electron. In this special case the initial state can be constructed unambiguously, allowing for a simple test of the accuracy of the dynamics. The response following nitrogen K-edge ionization in nitrosobenzene shows excellent agreement with fourth order algebraic diagrammatic construction (ADC(4)) results, suggesting that a properly chosen initial state allows TDDFT to adequately capture attosecond charge migration. Visualizing hole motion using an electron localization picture (ELF), we provide an intuitive chemical interpretation of the charge migration as a time-dependent superposition of Lewis-dot resonance structures. Coupled with the initial state solution to obtain such dynamics with TDDFT, this chemical picture facilitates interpretation of electron .
(Conference Room San Felipe)
14:00 - 15:00 Turgay Uzer: Using Modern Dynamical Systems Theory to Interpret Your Data
Ending up with a lot of dynamics data ( be it trajectories or wavefunctions) is a common situation we face whenever we are working with real (and sometimes, evenmodel) systems. How can we make sense of these massive data in terms of concepts we understand? In the past two decades I and my team have been using modern dynamical systems theory to uncover simple structures buried under massive trajectory calculations. The power of this tool is due to a very simple property: It allows you to focus on collective behavior of families of trajectories rather than individual ones, thereby helping you to isolate generic behavior. I will give you specific examples from our research which illustrate the use of this powerful tool.
(Conference Room San Felipe)
15:00 - 15:30 Catherine Lefebvre: Non-adiabatic dynamics in graphene controlled by the carrier-envelope phase of a few-cycle laser pulse
We numerically study the interaction of a terahertz pulse with monolayer graphene. We use a numerical solution of the two-dimensional Dirac equation in Fourier space with time-evolution based on split-operator method to describe the dynamics of electron-hole pair creation in graphene. We notice that the electron momentum density is affected by the carrier-envelope phase (CEP) of the few-cycle terahertz laser pulse that induces the electron dynamics. Two main features are observed: (1) interference pattern for any values of the CEP and (2) asymmetry, for non-zero values of the CEP. We explain the origin of the quantum interferences and the asymmetry within the adiabatic-impulse model by finding conditions to reach minimal adiabatic gap between the valence band and the conduction band in graphene. The quantum interferences emanate from successive non-adiabatic transitions at this minimal gap. We discuss how these conditions and the interference pattern are modified by the CEP. This opens the door to control fundamental time-dependent electron dynamics in the tunneling regime in Dirac materials. Also, this suggests a way to measure the CEP of a terahertz laser pulse when it interacts with condensed matter systems.
Joint work C. Lefebvre, F. Fillion-Gourdeau, D. Gagnon and S. MacLean
(Conference Room San Felipe)
16:00 - 17:00 Pablo Arrighi: Quantum walking in curved spacetime
A discrete-time Quantum Walk (QW) is essentially a unitary operator driving the evolution of a single particle on the lattice. Some QWs admit a continuum limit, leading to familiar PDEs (e.g. the Dirac equation). In this paper, we study the continuum limit of a wide class of QWs, and show that it leads to an entire class of PDEs, encompassing the Hamiltonian form of the massive Dirac equation in (1+1) curved spacetime. Therefore a certain QW, which we make explicit, provides us with a unitary discrete toy model of a test particle in curved spacetime, in spite of the fixed background lattice. Mathematically we have introduced two novel ingredients for taking the continuum limit of a QW, but which apply to any quantum cellular automata: encoding and grouping.
(Conference Room San Felipe)
17:00 - 17:30 Pantita Palittapongarnpim: Reinforcement Learning for Robust Adaptive Quantum-Enhanced Metrology
Quantum feedback control is challenging to implement as a measurement on a quantum state only reveals partial information of the state. A feedback procedure can be developed based on a trusted model of the system dynamic, which is typically not available in practical applications. We aim to devise tractable methods to generate effective feedback procedures that do not depend on trusted models. As an application, we construct a reinforcement-learning algorithm to generate adaptive for quantum-enhanced phase estimation in the presence of arbitrary phase noise. Our algorithm exploits noise-resistant differential evolution and introducesan accept-reject criterion. Our robust method shows a path forward to realizing adaptive quantum metrology with unknown noise properties.
(Conference Room San Felipe)
Friday, August 18
09:00 - 10:00 Yongyong Cai: Numerical methods for the Dirac equation in the non-relativistic limit regime
Dirac equation, proposed by Paul Dirac in 1928, is a relativistic version of the Schroedinger equation for quantum mechanics. It describes the evolution of spin-1/2 massive particles, e.g. electrons. Due to its applications in graphene and 2D materials, Dirac equations has drawn considerable interests recently. We are concerned with the numerical methods for solving the Dirac equation in the non-relativistic limit regime, involving a small parameter inversely proportional to the speed of light. We begin with commonly used numerical methods in literature, including finite difference time domain and time splitting spectral, which need very small time steps to solve the Dirac equation in the non-relativistic limit regime. We then propose and analyze a multi-scale time integrator pseudospectral method for the Dirac equation, and prove its uniform convergence in the non-relativistic limit regime. We will extend the study to the nonlinear Dirac equation case.
(Conference Room San Felipe)
10:00 - 10:30 Emmanuel Lorin: Simple digital quantum algorithm for linear first order hyperbolic systems (Conference Room San Felipe)
10:30 - 10:45 Conclusion (Conference Room San Felipe)
11:30 - 13:00 Lunch (Restaurant Hotel Hacienda Los Laureles) |
6c7bc3e83571f341 |
On the physical inconsistency of a new statistical scaling symmetry in incompressible Navier-Stokes turbulence
M. Frewer, G. Khujadze & H. Foysi
Trübnerstr. 42, 69121 Heidelberg, Germany
Chair of Fluid Mechanics, Universität Siegen, 57068 Siegen, Germany
Email address for correspondence:
May 16, 2020
A detailed theoretical investigation is given which demonstrates that a recently proposed statistical scaling symmetry is physically void. Although this scaling is mathematically admitted as a unique symmetry transformation by the underlying statistical equations for incompressible Navier-Stokes turbulence on the level of the functional Hopf equation, by closer inspection, however, it leads to physical inconsistencies and erroneous conclusions in the theory of turbulence.222This present investigation has been peer-reviewed by four independent referees. Their reports have been published in Frewer et al. (2014a).
The new statistical symmetry is thus misleading in so far as it forms within an unmodelled theory an analytical result which at the same time lacks physical consistency. Our investigation will expose this inconsistency on different levels of statistical description, where on each level we will gain new insights for its non-physical transformation behavior. With a view to generate invariant turbulent scaling laws, the consequences will be finally discussed when trying to analytically exploit such a symmetry. In fact, a mismatch between theory and numerical experiment is conclusively quantified.
We ultimately propose a general strategy on how to not only track unphysical statistical symmetries, but also on how to avoid generating such misleading invariance results from the outset. All the more so as this specific study on a physically inconsistent scaling symmetry only serves as a representative example within the broader context of statistical invariance analysis. In this sense our investigation is applicable to all areas of statistical physics in which symmetries get determined in order to either characterize complex dynamical systems, or in order to extract physically useful and meaningful information from the underlying dynamical process itself.
Keywords: Symmetries and Equivalences, Lie Groups, Scaling Laws, Deterministic and Statistical Systems, Principle of Causality, Turbulence, Closure Problem, Boundary Layer Flow;
PACS: 47.10.-g, 47.27.-i, 05.20.-y, 02.20.Qs, 02.50.Cw
1 Introduction
With the aid of today’s modern computer algebra systems, the method of symmetry analysis is one of the most prominent and efficient tools to investigate differential equations arising in various sciences (Ovsiannikov, 1982; Stephani, 1989; Olver, 1993; Ibragimov, 1994; Andreev et al., 1998; Bluman & Kumei, 1996; Meleshko, 2005). A considerable number of special techniques for simplifying, reducing, mapping and solving differential equations have been developed and enhanced so far.
The natural language for symmetry transformations is that of a mathematical group, which either can be discrete or continuous. If an invariant transformation group involves one or more parameters which can vary continuously it is called a Lie symmetry group, named after Sophus Lie who first developed the theory of continuous transformation groups at the end of the nineteenth century (Lie, 1893).
In fact, most differential equations of the sciences possess nontrivial Lie symmetry groups. Under favorable conditions these symmetries can be exploited for various purposes, e.g. performing integrability tests and complete integration of ODEs, finding invariant and asymptotic solutions for ODEs and PDEs, constructing conservation laws and dynamical invariants, etc. Not to forget that Lie-groups are also successfully utilized in the ‘opposite’ direction in modelling dynamical behavior itself, i.e. used for constructing dynamical equations which should admit a certain given set of symmetries. The most impressive results to date were gained by gauge theory for quantum fields (Weinberg, 2000; Penrose, 2005). Hence the existence of symmetries thus has a profound and far-reaching impact on solution properties and modelling of differential equations in general. Their presence very often simplifies our understanding of physical phenomena.
Of particular interest are scaling symmetries as they lead to concepts as scale invariance of dynamical laws or self-similarity of solution manifolds. A scaling symmetry of a physical system can either be associated with a finite dimensional Lie-group (global scaling symmetry) in which all group parameters are strict constants, or with an infinite dimensional Lie-group (local scaling symmetry) in which at least one group parameter is not constant, e.g. by showing a space-time coordinate dependence of the considered system.
Most physical processes, however, only admit global scaling symmetries since the requirement for a local scaling symmetry is too restrictive. In fact, a physical process which admits a local scaling symmetry also admits this symmetry globally. For example, for a local scaling symmetry exhibiting space-time dependent group parameters (which essentially forms the cornerstone of every quantum gauge theory) the corresponding global symmetry is then just given by the same symmetry where only the group parameters are identically fixed at every point in space-time. The opposite, in which a global symmetry automatically implies a local symmetry, is, of course, not the rule.
The purpose of this article is to show that in general caution has to be exercised when interpreting and exploiting symmetries if they act in a purely statistical manner. Although being mathematically admitted as statistical symmetries by the underlying statistical system of dynamical equations, they nevertheless can lead to physical inconsistencies. Without loss of generality, we will demonstrate this issue at the example of a new and recently proposed global statistical scaling symmetry for the incompressible Navier-Stokes equations. Our study and its conclusion can then be easily transferred to any other statistical symmetry within the Navier-Stokes theory, or, more generally, to any other theory within physics which necessitates a statistical description in the thermodynamical sense.
The current study is organized as follows: Section 2 opens the investigation by introducing the single and only continuous (Lie-point) scaling symmetry which the deterministic incompressible Navier-Stokes equations can admit. Although being the only true scaling symmetry, it is yet not the only scaling transformation which leaves these equations invariant when viewed in a broader context. Regarding the class of all possible invariant Lie-point scaling transformations, a brief outline is given to distinguish between the concept of a symmetry transformation and that of an equivalence transformation. A careful distinction between these two concepts is surely necessary in to order fully grasp the spirit of this article.
Section 3 then changes from the deterministic to the statistical description. By choosing the functional Hopf formulation we are dealing with a formally closed and thus complete statistical approach to turbulence. Instead of the weaker invariant class of equivalence transformations, this enables us to generate true statistical symmetry transformations, in particular a new scaling symmetry is considered which first got mentioned in the study of Wacławczyk & Oberlack (2013a).
Section 4 is at the heart of the article’s line of reasoning. It not only demonstrates that the new Hopf scaling symmetry induces a disguised symmetry, which, on a lower level of statistical description, only acts as an equivalence transformation, but also gives a mathematical proof that both the Hopf symmetry and its induced equivalence transformation are essentially unphysical.
Section 5 presents the consequences when generating statistical scaling laws from such a misleading symmetry transformation. These laws will be matched to DNS data at the example of a zero-pressure-gradient (ZPG) turbulent boundary layer flow for the high Reynolds number case of (based on the momentum thickness of the flow). Best curve fits are generated with the aid of using basic tools from statistical data analysis, as the chi-square method to quantitatively measure the quality of the fits relative to the underlying DNS error. As a result, a mismatch between theory and numerical experiment is clearly quantified.
Section 6 concludes and completes the investigation. Theoretically as well as graphically we will conclude that all recently proposed statistical scaling laws which are based on this new unphysical symmetry have no predictive value and, in our opinion, should be discarded to avoid any further misconceptions in future work when generating turbulent scaling laws according to the invariance method of Lie-groups. In a brief historical outline we finally point out that even if this method of Lie-groups in its full extent is applied and interpreted correctly, it nevertheless faces strong natural limits which prevents the effect of achieving a significant breakthrough in the theory of turbulence.
A large but indispensable part of this investigation has been devoted to the appendix. All appendices stand for their own and can be read independently from the main text. In particular Appendix A & C are written in the form of a compendium to serve as an aid and to accompany the reader through the main text. Their purpose is to mathematically support the criticism we put forward in our first part, the theoretical part of our study from Section 2 to Section 4.
2 The deterministic incompressible Navier-Stokes equations
For reasons of simplicity we will in the following only consider the general solution manifold of the incompressible Navier-Stokes equations in the infinite domain without specifying any initial or boundary conditions (Batchelor, 1967; Pope, 2000; Davidson, 2004).
The corresponding deterministic equations can either be written in local differential form as
or equivalently, when using the continuity equation to eliminate the pressure from the momentum equations, in nonlocal integro-differential form as
By construction, equation (2.2) has the property that if the initial velocity field is solenoidal, i.e. if is initially zero, then it will be solenoidal for all times.
The single and only continuous (Lie-point) scaling symmetry which the deterministic incompressible Navier-Stokes equations (2.1), or in the form (2.2), can admit is given by (Olver, 1993; Fushchich et al., 1993; Frisch, 1995; Andreev et al., 1998)
being just a global scaling symmetry with constant group parameter . That (2.3) really acts as a symmetry transformation can be easily verified due its globally uniform structure: By inserting transformation (2.3) into system (2.1), or into (2.2), will leave the equations in each case fully indifferent.
Before we turn in the next section to a complete (fully determined) statistical description of the Navier-Stokes equations, it is essential at this stage to make a careful distinction between two different kinds of invariant transformations. Those being true symmetry transformations and those being only equivalence transformations (Ovsiannikov, 1982; Ibragimov, 1994, 2004).
Although both types of invariant transformations form a Lie-group, they each have a completely different impact when trying to extract valuable information from a given dynamical system. The knowledge of symmetry transformations is mainly used to construct special or general solutions of differential equations, while equivalence transformations are used to solve the equivalence problem for a certain class of differential equations by group theory, that is, to find general criteria whether two or more different differential equations are connected by a change of variables drawn from a transformation group. Hence, the quest for a symmetry transformation is thus fundamentally different to that for an equivalence transformation. The difference between these two kinds of transformations is defined as:
• A symmetry of a differential equation is a transformation which maps every solution of the differential equation to another solution of the same equation. As a consequence a symmetry transformation leads to complete form-indifference of the equation. It results as an invariant transformation if the considered equation is closed.222A set of equations is defined as closed if the number of equations involved is either equal to or more than the number of dependent variables to be solved for.
• An equivalence transformation for a differential equation in a given class is a change of variables which only maps the equation to another equation in the same class. As a consequence an equivalence transformation only leads to a weaker form-invariance of the equation. It results as an invariant transformation either if existing parameters of the considered equation get identified as own independent variables, or if the considered equation itself is unclosed.333A set of equations is defined as unclosed if the number of equations involved is less than the number of unknown dependent variables.
Hence, although both transformations are invariant transformations and both form a Lie-group, they yet lead to different implications. Let us illustrate this decisive difference at two simple examples:
Example 1: By considering the viscosity in (2.1) not as a parameter, but rather, next to the space-time coordinates, as an own independent variable, a detailed invariance analysis will give the following additional scaling group, which in infinitesimal form reads as (Ünal, 1994, 1995)
being an infinite dimensional Lie-group with a group parameter depending on the viscosity variable . Specifying for example will reduce to a finite dimensional subgroup, for which the non-infinitesimal form can then be explicitly determined to
Hence, just by considering the viscosity, or alternatively the Reynolds number , as an own independent variable, we see that next to the global scaling symmetry (2.3) we gained an additional global scaling invariance (2.5): The viscosity as well as the space-time coordinates scale in exactly the same manner respective to the constant group parameter . However, this additional invariant transformation (2.5) does not act as a true symmetry, but only in the weaker sense as an equivalence transformation, in that it only maps the Navier-Stokes equation in the class of different viscosities to another equation in that same class. Indeed, inserting transformation (2.5) into form (2.1), will not leave it form-indifferent, but only form-invariant
since the parametric value changed to . Particularly in this simple case, however, we can alternatively also say that transformation (2.5) actually maps a solution of equation (2.1), with a certain value in viscosity , to another solution of the same equation (2.6), but with a different value in viscosity . Yet, note that irrespective of the functional choice for the continuous group parameter , the invariant transformation (2.4) will never reduce to a true symmetry transformation. Every specific functional choice of will give a different global equivalence scaling transformation.
Example 2: Taking the statistical ensemble average of the deterministic Navier-Stokes equations in the form (2.1), we get, due to the existence of the nonlinear convective term, the following unclosed (underdetermined) set of equations (Pope, 2000; Davidson, 2004)
where the second rank tensor is the unclosed second velocity moment based on the full instantaneous velocity field . In the most general case is to be identified as an unknown and thus arbitrary functional of the space-time coordinates and of the mean fields of velocity and pressure along with its spatiotemporal variations, either in local, nonlocal or mixed form.
For reasons of simplicity let us consider for the moment as an arbitrary function which only shows an explicit dependence on the space-time coordinates, i.e. . If we now perform an invariance analysis of the underdetermined system (2.7), by extending, next to the mean velocity and the mean pressure , the list of dependent variables with the unclosed and thus arbitrary function as an own dependent variable, we immediately gain the following invariant statistical scaling222Note that in the general case a careful distinction must be made between the transformed expression , which directly refers to the transformed mean velocity field, and the transformed expression , which, on the other hand, refers to the transformed instantaneous (fluctuating) velocity field being averaged only after its transformation. However, in the specific and simple case as (2.8) both transformed fields are identical . The obvious reason is that since transformation (2.8) only represents a globally uniform scaling with the constant factor , it will commute with every averaging operator (for a more detailed discussion on this subject, see Appendix D.1).
which globally only scales the system’s dependent variables while the coordinates stay invariant. It is clear that this invariant transformation cannot act as a symmetry transformation. It can only act in the weaker sense as an equivalence transformation, since in the considered functional class of arbitrary second moment functions it only maps the unclosed first moment equation (2.7) into another equation of the same class:
where the unclosed and thus arbitrary function itself gets mapped to a new and different, but still unclosed and thus arbitrary function . However, since is from the same considered functional class as , it thus also exhibits an explicit dependence only on the coordinates: .
Again, the invariant transformation (2.8) only represents an equivalence and not a symmetry transformation of the unclosed system (2.7), since it turns this system only into an equivalent but not identical form. To see this explicitly, imagine we would specify the unclosed moment function , say by
Then according to (2.8) the transformed moment is defined or given by
Hence, while system (2.7) turns into the closed form
the transformed system (2.9), according to (2.11), will turn into
which obviously, due to the explicit factor , is not identical to the corresponding untransformed differential system (2.12). Instead, we can only say that system (2.13) is equivalent to system (2.12) in that they originate from the same class of functions and which both only show an explicit dependence on the coordinates.
This of course stands in strong contrast to any given symmetry transformation of a closed system. For example, the scaling symmetry (2.3) of the deterministic Navier-Stokes equations (2.1), which, if we would specify a certain solution and , it will be mapped according to (2.3) to another solution and of the same and thus to (2.1) identical equation:
Furthermore, the statistical symmetry
which corresponds to (2.3) when reformulated for the mean fields up to the second velocity moment, leaves only the unclosed system (2.7) invariant, but not the specified closed system (2.12). That means that the specification (2.10) is not compatible with the statistical symmetry (2.15), thus showing that the specific functional choice (2.10) on the averaged level is inconsistent to the underlying deterministic (fluctuating) level (2.1). In strong contrast to the statistical equivalence transformation (2.8) which is compatible to both the unspecified system (2.7) and the specified system (2.12).
This explicit demonstration clearly shows that a Lie symmetry transformation induces a far more stronger invariance than a Lie equivalence transformation. Hence, the consequences which can be drawn from a symmetry transformation are by far more richer than for any equivalence transformation.
Three things should be noted here. Firstly, since the transformation (2.8) only scales the system’s dependent variables by keeping the coordinates invariant, it is a typical scaling invariance which only linear systems can admit. Indeed, due to the identification of the unclosed function as an own dependent variable, we turned the underdetermined statistical system (2.7) formally into a linear set of equations. As we will discuss in more detail in the next sections, such an identification is misleading, since it is hiding essential information about the underlying deterministic theory. In other words, although transformation (2.8) correctly acts as a mathematical equivalence transformation for the statistical system (2.7), we will demonstrate that it nevertheless leads to a physical inconsistency.
Secondly, the type and particular structure of an equivalence transformation strongly depends on the explicit variable dependence of itself. Allowing for various different functional dependencies, as e.g. for , or more generally for where denotes the collection of functions together with all their derivatives up to order , can cause different equivalence groups in each case (Meleshko, 1996; Ibragimov, 2004; Bila, 2011; Chirkunov, 2012).
Thirdly, the equivalence transformation (2.8) given in this example has a much weaker impact when trying to extract information from the solution manifold of its underlying dynamical set of equations than the equivalence transformation (2.5) given in the previous example. In contrast to (2.5), which at least could map between specific solutions of different viscosity, the equivalence transformation (2.8) is completely unable to map between specific solutions. The reason is that the considered system of equations is unclosedand thus underdetermined, however not arbitrarily, but in the specified sense that the unclosed term can be physically and uniquely determined from the underlying but analytically non-accessible deterministic velocity field . In other words, this circumstance, in having an underlying theory from which the unclosed term physically emerges, opens the high possibility that physical solutions get mapped into unphysical ones when employing an equivalence transformation as (2.8). This problem will be discussed next.
2.1 The concept of an invariant solution
In order to understand and recognize the subtle difference between a symmetry and an equivalence transformation in its full spectrum, we will discuss this difference again, however, from a different perspective, from the perspective of generating invariant solutions.
First of all, one should recognize that the Lie algorithm to generate invariant transformations for differential equations can be equally applied in the same manner without any restrictions to under-, fully- as well as overdetermined systems of equations (Ovsiannikov, 1982; Stephani, 1989; Olver, 1993; Ibragimov, 1994; Andreev et al., 1998; Bluman & Kumei, 1996; Meleshko, 2005), even if the considered system is infinite dimensional (Frewer, 2015a, b). However, only for fully or overdetermined systems these invariantLie transformations are called and have the effect of symmetry transformations, while for underdetermined systems these invariant Lie transformations are called and have the effect of equivalence transformations.
In other words, although both a symmetry as well as an equivalence transformation form a Lie-group which by construction leave the considered equations invariant, the action and the consequence of each transformation is absolutely different. While a symmetry transformation always maps a solution to another solution of the same equation, anequivalence transformation, in contrast, generally only maps a possible solution of one
underdetermined equation to a possible solution of another underdetermined equation, where in the latter case we assume of course that a solution of an underdetermined equation can be somehow constructed or is somehow given beforehand.
Now, it is clear that if for an unclosed and thus underdetermined equation, or a set of equations, the unclosed terms are not correlated to an existing underlying theory, then the construction of an invariant solution will only be a particular and non-privileged solution within an infinite set of other possible and equally privileged solutions. But if, on the other hand, the unclosed terms are in fact correlated to an underlying theory, either in that they underly a specific but analytically non-accessible process or in that they show some existing but unknown substructure, then the construction of an invariant solution is misleading and essentially ill-defined, in particular if no prior modelling assumptions for the unclosed terms are made. To follow this conclusion in more detail we refer to Appendix A for an extensive discussion on this subject.
Hence, for an unclosed and thus underdetermined system of equations either infinitely many and equally privileged solutions (including all possible invariant solutions) can be constructed, or, depending on whether the unclosed terms are correlated to an underlying but analytically non-accessible theory as turbulence, no true solutions and thus also no true invariant solutions can be determined as long as no prior modelling procedure is invoked to close the system of equations. Therefore, since closed systems do not face this problem, the construction of invariant solutions from symmetry transformations is well-defined, while for equivalence transformations, which are admitted by unclosed systems, the construction of invariant solutions is misleading and can be even ill-defined as in the statistical theory of turbulence. Thus using for example the equivalence transformation (2.8) to generate a privileged statistical invariant solution for the unclosed system (2.7) is basically ill-defined, if no prior modelling assumptions for the underlying substructure of is made to close the equations (see first part of Appendix A.2).
However, if nevertheless within the theory of turbulence such invariant results are generated, they must be carefully interpreted as only being functional relations or functional complexes which stay invariant under the derived equivalence group, and not as being privileged solutions of the associated underdetermined system, as done, for example, in Oberlack & Günther (2003); Khujadze & Oberlack (2004); Günther & Oberlack (2005); Oberlack et al. (2006); Oberlack & Rosteck (2010); She et al. (2011); Oberlack & Zieleniewicz (2013); Avsarkisov et al. (2014) and Wacławczyk et al. (2014). In all these studies the underlying statistical system of dynamical equations is unclosed and thus underdetermined, however, not arbitrarily underdetermined, but underdetermined in the sense that all unclosed terms can be physically and uniquely determined from the underlying but analytically non-accessible instantaneous (fluctuating) velocity field. In particular the system considered in Oberlack & Rosteck (2010), although formally infinite in dimension, reveals itself by closer inspection as such an underdetermined system, for which, as was already said before, the determination of invariant solutions is ill-defined (see last part of Appendix A.2). This study of Oberlack & Rosteck (2010), which serves as a key study for the recent results made in Oberlack & Zieleniewicz (2013); Avsarkisov et al. (2014) and Wacławczyk et al. (2014), will be analyzed in more detail in the next sections.
Important to note is that up to now only in the specific case of homogeneous isotropic turbulence (Davidson, 2004; Sagaut & Cambon, 2008) all those invariant functional complexes which are gained from equivalence scaling groups can be further used to yield more valuable results, in particular the explicit values for the decay rates (Oberlack, 2002), since one has exclusive access to additional nonlocal invariants such as the Birkhoff-Saffman or the Loitsyansky integral. However, for wall-bounded flows it is not clear yet how to use or exploit such invariant functional complexes in a meaningful way, since up to now no additional nonlocal invariants are known.222This aspect also needs to be addressed in Oberlack’s earlier work (Oberlack, 1999, 2001), where also only equivalence transformations were obtained, but which, in addition, were specifically obtained as a result of an incorrect conclusion (Frewer et al., 2014b).
Finally it is worthwhile to mention that for example the work of Khabirov & Ünal (2002a, b) clearly shows in how equivalence transformations within the theory of turbulence can be exploited in a correct manner, which stands in strong contrast to the misleading approach of Oberlack et al. The major difference to the Oberlack et al. approach is that in Khabirov & Ünal (2002a, b) the invariant functions for the unclosed term (which are generated within different optimal Lie subalgebras for all possible Lie-point equivalence transformations of the unclosed Kármán-Howarth equation) are not identified as true solutions of the underlying unclosed equation itself, but, instead, are identified as possible model terms which then in each case consequently leads to a closed model equation. This is done in Khabirov & Ünal (2002a), while in Khabirov & Ünal (2002b) these closed Kármán-Howarth model equations are then solved in each case by the now well-defined technique of invariant solutions, which Khabirov & Ünal (2002a, b) then call physical invariant solutions. Of course, in how far these solutions then describe reality must be checked in each case by experiment or DNS. But that’s a different problem!
We want to close this section by giving a citation from Khabirov & Ünal (2002b) which exactly describes the behavior and effect of an equivalence transformation when trying to exploit it in order to gain insight into the solution manifold of an unclosed and thus underdetermined equation: “Equivalence transformations may affect the behavior of solutions in physical sense. In other words, they may transform physical solutions into unphysical ones. But inverse equivalence transformations may act better in physical sense. These properties of the equivalence transformations will be made use of in the sequel.”
3 A complete statistical description: The Hopf equation
In order to determine new statistical symmetry transformations, and not equivalence transformations, we have to operate within a framework which offers a complete and fully determined statistical description of Navier-Stokes turbulence. Any statistical description which is not formally closed, that is, every statistical description which from the outset would involve unclosed and thus arbitrary functionals, is not suited for this purpose. As was shown in the previous section (Example 2), every invariance analysis would then only generate very weak equivalence transformations.
Currently there are only two statistical approaches to incompressible and spatially unbounded Navier-Stokes turbulence which independently offer a complete and fully determined statistical description. Both approaches formally circumvent the explicit closure problem of turbulence in that they not only overcome the local differential framework in favor of a consistent nonlocal integral framework, but also in that they operate on a higher statistical level which goes beyond the level of the statistical moments. In each case the consequence is a linearly infinite but formally closed statistical approach.
These two approaches are the Lundgren-Monin-Novikov chain of equations (Lundgren, 1967; Monin, 1967; Friedrich et al., 2012) and the Hopf equation (Hopf, 1952; McComb, 1990; Shen & Wray, 1991). While the former operates on the high statistical level of the probability density functions for the -point velocity moments
the Hopf equation operates on the even higher level of the probability density functionals for these moments (3.1). As shown in Monin (1967), the Lundgren-Monin-Novikov system is just the discrete version of the functional Hopf equation. The former is iteratively given as an infinite but fully determined hierarchy of linearly coupled equations, while the latter is given as a single fully determined linear functional equation of infinite dimension. Since in both cases no arbitrary functions are involved, they both can be formally identified as closed systems.
To note is that a third statistical approach exists, which also leads to a linearly infinite hierarchy of equations, the so-called Friedmann-Keller chain of equations (Monin & Yaglom, 1971), which, in contrast to the other two approaches, operates directly on the lower level of the -point velocity moments (3.1). This chain can either be formulated in the local differential framework, as presented in Oberlack & Rosteck (2010) and also recently in Wacławczyk et al. (2014), or in the nonlocal integral framework as presented in Fursikov (1999) and re-derived in Appendix B.
However, in contrast to the Lundgren-Monin-Novikov chain or the Hopf equation, the Friedmann-Keller chain is not closed, not even in a formal sense. This matter is extensively discussed in Appendix C. Irrespective of the analytical framework and in the sense as explained in detail in Appendix C, the Friedmann-Keller chain always involves more unknown functions than determining equations. For both the integral framework as presented in Appendix B, as well as for the differential framework as presented in Oberlack & Rosteck (2010) and Wacławczyk et al. (2014), this can be easily confirmed by just explicitly counting the number of equations versus the number of functions to be determined. In this sense the Friedmann-Keller chain, although infinite in dimension, does not serve as a fully determined statistical description of Navier-Stokes turbulence. Any invariance analysis performed upon this chain will only generate the weaker class of equivalence transformations, simply because the chain is always permanently underdetermined and thus involving arbitrary functions.
Now, in order to prove our statement that a new statistical scaling symmetry is physically inconsistent with the underlying deterministic Navier-Stokes equations (2.2), either the Lundgren-Monin-Novikov chain or Hopf equation can be used. They are equivalent in so far as they both lead to the same conclusion. However, to prove this statement in the next section as efficiently as possible, we will only choose the functional Hopf-approach.
The functional Hopf equation (HEq)
describes the dynamical evolution of the characteristic or moment-generating functional
which is the functional Fourier transform (Klauder, 2010; Kleinert, 2013) of the probability density functional for the velocity field sampled for each time step in infinitely non-denumerable (continuum) number of points , which itself plays the role of a continuous index inside the functionals and , but nonetheless still to be interpreted next to the coordinate and the field as an own independent and active variable in the underlying dynamical equation (3.2). In other words, both functionals and do not explicitly depend on , i.e. in equation (3.2) the variable only appears implicitly in the dependent variable upon which the coordinate operators can then act on. The functional variable , however, is an arbitrary but real, integrable and time-independent solenoidal external source function with vanishing normal component at the (infinite far) boundary. In order to guarantee for physical consistency, a mathematical solution of the Hopf equation (3.2) is only admitted if for all times the following conditions are fulfilled
which stem from the fact that the probability density functional is real, non-negative, and normalized to one in sample space, i.e. , with . This then defines the (infinite) physical dimension of the probability density functional as with , while the characteristic functional is dimensionless.
Finally note that the above-presented functional integration element for the Fourier transform (3.3) is symmetrically defined as the following infinite product of one-dimensional integrals over at every point for every component and coordinate (see e.g. Kleinert (2013), Chapter 13):
Hereby, space-time is grated into a fine equidistant lattice, where for every coordinate the following discrete lattice points were introduced
with a very small lattice spacing .
4 The new statistical scaling symmetry and its inconsistency
It is straightforward to recognize that the linear Hopf equation (3.2) admits the following (functional) Lie-point scaling symmetry
where is a globally constant and real group parameter. This invariance first got mentioned in Wacławczyk & Oberlack (2013a). Note that symmetry is only compatible with the first two physical constraints given in (3.4). The third, non-holonomic constraint in (3.4) gets violated if the characteristic functional is generally transformed as (4.1).
But, if the values of the group constant are restricted to then symmetry (4.1) is fully compatible with all three physical constraints (3.4). However, by restricting the values of to , the symmetry (4.1) turns into a semi-group since no inverse element can be defined or constructed anymore. In other words, the third constraint in (3.4) breaks the Lie-group structure of the symmetry (4.1) down to a semi-group.
The connection of the moment generating functional (3.3) to the multi-point velocity correlation functions (3.1) is given as (Hopf, 1952; McComb, 1990; Shen & Wray, 1991)
By inserting (4.1) into the above functional relation (4.2) of the transformed domain
we can see how the symmetry transformation (4.1) induces the following invariant transformation for the -point velocity correlation functions (3.1)
which for the first time was derived in Khujadze & Oberlack (2004) as a “new statistical symmetry” of Navier-Stokes turbulence. For their derivation they however only considered the unclosed multi-point equations for the velocity moments up to order in the limit of an inviscid parallel shear flow in ZPG turbulent boundary layer flow222In Khujadze & Oberlack (2004) the iterative sequence of the “new statistical scaling symmetry” (4.4) begins only from onwards, i.e only for all . The transformation for is excluded, i.e. the mean velocity stays invariant., while recently in Oberlack & Rosteck (2010) this result (4.4) was re-derived most generally without any flow restrictions by using the full infinite chain of Friedmann-Keller equations. In both derivations the statistical invariance analysis was performed in the local differential framework based on the corresponding deterministic form (2.1), in which the pressure field is explicitly present. Thus in both derivations their results also include, next to the -point velocity moments , all invariant transformations for the velocity-pressure correlations. Of course, these correlations are not part of our result given here, since we derived (4.4) from the Hopf equation (3.2) which is based on the underlying nonlocal deterministic integral form (2.2), in which the pressure field has been consistently eliminated from the dynamical equations.
Indeed, it can be easily verified that transformation (4.4) is admitted as an invariant transformation also by the nonlocal integro-differential Friedmann-Keller equations
which are defined and derived in Appendix B. However, as noted in Appendix B and discussed in more detail in Appendix C, the invariant transformation (4.4) does not act as a symmetry transformation, but only in the weaker form as an equivalence transformation. The reason is that the hierarchy (4.5) forms an unclosed system. The still missing transformation rule for the unclosed -point function of -th moment , which formally stands for , is then dictated by the given transformation rule (4.4) for the corresponding -point function as
It is clear that this simple transformation rule (4.6) is only due to the global and uniform nature of (4.4), in which all system variables transform uniformly by the same constant scaling exponent and independent from the coordinates, which themselves stay invariant.
Important to note here is that in Oberlack & Rosteck (2010) the invariant transformation (4.4) is considered as a true symmetry transformation. However, as already discussed in Section 3 and explained in Appendix C, this claim is not correct. Transformation (4.4) can only act as an equivalence transformation and not as a symmetry transformation. Hence, the invariance analysis performed in Oberlack & Rosteck (2010) is based on equivalence and not on symmetry groups, simply because unclosed and thus arbitrary functions are permanently involved within the considered analysis.
But this insight now has consequences in the interpretation of their newly derived statistical scaling laws, because these laws as presented in Oberlack & Rosteck (2010) may not be interpreted as being privileged solutions of the underlying statistical set of equations as was done therein. They may only be interpreted as being functional relations or functional complexes which stay invariant under the derived equivalence group, nothing more! Therefore these new relations derived in Oberlack & Rosteck (2010) only possibly but not necessarily can serve as useful turbulent scaling functions. Moreover, a comparison to DNS results reveals that these new statistical scaling laws presented in Oberlack & Rosteck (2010) are unphysical as they clearly fail to fulfil the most basic predictive requirements of a scaling law. For ZPG turbulent boundary layer flow this investigation is presented and further discussed in Section 5.
The reason for this failure is twofold: Next to the reason just discussed, that the invariance analysis in Oberlack & Rosteck (2010) was performed upon an underdetermined statistical system which cannot admit true invariant solutions without establishing a correct link to the underlying deterministic equations, the second and more stronger reason is that the symmetry (4.1) itself is unphysical. This physical inconsistency of course transfers down to (4.4), as it is induced by . This will also explain why on the higher level of the probability density functionals a true symmetry transformation, such as (4.1), only induces an equivalence transformation, such as (4.4), and not a corresponding symmetry transformation on the lower level of the -moment functions .
Before we proceed with the proof, it is worthwhile to see that when considering the chain only up to the second moment (), the general equivalence transformation (4.4) will reduce in the smooth limit of zero correlation length to the equivalence transformation (2.8) discussed in Section 2:
where, due to the eliminated pressure field in the underlying Hopf equation, the missing transformation rule for the mean pressure field in (4.4) is consistently dictated by the mean solenoidal velocity field through the one-point momentum equation (2.7) as given in (2.8). Hence, the above mentioned and still to be proven physical inconsistency of (4.1) will thus even fully transfer down to (2.8).
4.1 Proof of the physical inconsistency of symmetry
The physical inconsistency of symmetry (4.1) can be readily observed when connecting the averaged (statistical) level back to the fluctuating (deterministic) level. For the Hopf equation (3.2) the transition rule in going from the fine-grained (fluctuating) to the coarse-grained (averaged) level is defined by the path integral (3.3), which in each time step sums up all coarse-grained probabilities for all possible realizations in the fine-grained velocity field . Now, in order to see the inconsistency, it is necessary to consider the inverse functional Fourier transform of (3.3) in the transformed domain of (4.1):
By inserting the transformation (4.1) into the right-hand side of (4.8) we get
which is the corresponding transformation rule for , if transforms as given in (4.1), where for better readability we suppressed the implicit dependence on both sides. The function is the functional analog of the Dirac -function, and is called the -functional (Klauder, 2010; Kleinert, 2013). In the lattice approximation, corresponding to (3.5) with (3.6), they are defined as an infinite product of ordinary one-dimensional -functions
and thus having the obvious property
Note that since the variables and transform invariantly under (4.1), the velocity field must stay invariant too, otherwise we would loose the definition (3.3) of a functional Fourier transform in the transformed domain.
However, on the other hand, if we insert (4.1) into the functional relation (4.2) in order to explicitly generate the transformation rule for the -point velocity moments, as was already exercised in (4.3), we will get again
which, if using the representation of (3.3), turns into the following relations
By recognizing again the already mentioned fact that the velocity field (along with its continuous index) is an invariant under the considered transformation (4.1), we can replace the variable in (4.13) with and vice versa for all points. Then, by equating in its present and already irreducible form the left-hand side with the right-hand side for each order , we obtain from (4.13) the transformation relation222Note that a local relation can only be identified correctly from an integral relation if it’s formulated irreducibly, i.e., in a form such that it cannot be reduced or simplified any further.
which is in conflict with the previously found transformation rule (4.9) for , i.e. there is no unique transformation rule for the probability density functional . Consequently, via the fine- to coarse-grained transition rule (4.8) the symmetry transformation (4.1) induces an inconsistency. This conflict, however, can only be solved if , but which then turns the symmetry transformation (4.1) into a trivial identity transformation.
Worthwhile to note is that by physical intuition alone one already can recognize this conflict just by solely observing relation (4.9) more closely. Because, since the variables , and transform invariantly under (4.1) we can identically write the transformation rule (4.9) also as
which states that although the system on the fine-grained level stays unchanged, it nevertheless undergoes a global change on the coarse-grained level, which is completely unphysical and not realized in nature.
We thus have a classical violation of cause and effect, as the system would experience an effect (change in averaged dynamics) without a corresponding cause (change in fluctuating dynamics). Note that the opposite conclusion is not the rule, i.e. a change on the fluctuating level can occur without inducing an effect on the averaged level. A macroscopic or coarse-grained (averaged) observation might be insensitive to many microscopic or fine-grained (fluctuating) details, a property of nature widely known as universality (see e.g. Marro (2014)). For example, a high-level complex coherent turbulent structure, though a consequence of the low-level fluctuating description, does not depend on all its details on its lowest level. The opposite again, however, is not realized in nature, i.e., stated differently, if the coherent structure experiences a global change, e.g. in scale or in a translational shift, it definitely must have a cause and thus must go along with a corresponding change on the lower fluctuating level — see also the discussions, e.g., in Frewer et al. (2015, 2016a).
Exactly this non-physical behavior can also be independently observed in the induced transformation rule (4.4) for the -point velocity moments (3.1). It can either be exposed directly on the fluctuating level as an unphysical equivalence transformation, or indirectly on the averaged level as a superfluous or artificial equivalence transformation. In any case, on each level we will gain different insights for this non-physical transformation behavior.
Hence, fully detached from the finding that the equivalence transformation (4.4) for the velocity moments is induced by an unphysical symmetry transformation (4.1) on the higher statistical level of the corresponding probability densities, we will now repeat our investigation on the lower statistical level of the velocity moments themselves, by only focussing on the link between (4.4) and the unclosed Friedmann-Keller equations (4.5).
4.2 The unphysical behavior of equivalence on the fluctuating level
In the case of the Friedmann-Keller chain, especially when used in the oversimplified form (4.5), particular care has to be taken when actually performing a systematic invariance analysis on these equations. The problem is that in contrast to the other two statistical approaches, i.e. the Lundgren-Monin-Novikov chain or the Hopf equation, the Friedmann-Keller chain does not naturally come along with additional physical constraints which are necessary in order to reveal the nonlinear and nonlocal connection between all constituents (see Appendix C).
This circumstance can easily lead to misleading results, as it is the case for (4.4). Here it is necessary to recognize that (4.4) is an invariant scaling transformation which only linear systems can admit, since only the system’s dependent variables get uniformly scaled, while the coordinates themselves stay invariant. Indeed, the corresponding dynamical system which admits (4.4) is the Friedmann-Keller chain of equations (4.5), which is a linear system, since and are both linear operators (see Appendix B).
However, this result, that the hierarchical system (4.5) admits (4.4) as an equivalence transformation, is misleading, since it suggests that all correlation functions scale uniformly with the same scaling factor, which really is not the case as the underlying theory dictates a nonlinear correlation between all these quantities. The problem clearly lies in the notation: Using a formal symbol as , where only an external index allows to distinguish between different multi-point functions, hides the actual underlying correlation information among them. In this sense the notation used in equation (4.5) is counterproductive from the perspective of an analysis on invariance, in that it oversimplifies the physical situation by representing the dynamics for the -point velocity correlations as a linear PDE system, while it is actually based on a nonlinear theory.222Note that also the Hopf equation, and its discrete version, the Lundgren-Monin-Novikov equations are linear systems, but at the expense of operating on a higher statistical level than the moments of the Friedmann-Keller chain of equations, which, by definition, are all uniquely correlated in a nonlinear manner.
It is this misleading aspect which was not recognized and taken into account in Oberlack & Rosteck (2010). That is to say, by explicitly revealing the underlying nonlinear structure behind the formal symbol in (4.5), namely that contains one deterministic velocity field more than , will ultimately break the equivalence scaling (4.4),as will be shown next.
Since the velocity correlation function in (4.4) is nonlinearly built up by multiplicative evaluations of the instantaneous (fluctuating) velocity field according to (3.1), the following chain of reasoning instantly emerges: Since for the averaged function scales as for all points in the domain, the corresponding fluctuating quantity has to scale in the same manner, since every averaging operator is linearly commuting with any constant scale factor. But this implies that any product of fluctuating fields has to scale as , which again implies that also the corresponding averaged quantity then has to scale as . Symbolically the chain reads as
For a detailed explanation of this proof in all its steps, please refer to Appendix D. Conclusion (4.16) clearly demonstrates that if the one-point function globally scales as then the -point function has to scale accordingly, namely as and not as as dictated by (4.4). Only the former scaling will guarantee for an all-over consistent relation between the fluctuating and averaged level of the dynamical Navier-Stokes system. In other words, if a dynamical system experiences a global transformational change on the averaged level then there must exist at least one corresponding change on the fluctuating level (Frewer et al., 2016a). But exactly this is not the case for (4.4) as both and scale therein with the same global factor, for which, thus, a corresponding fluctuating scaling cannot be derived or constructed, neither as a symmetry nor as any regular transformation, meaning that the system experiences a global change on the averaged level with no corresponding change on the fluctuating level — again the classical violation of cause
and effect as was already discussed before. Hence, on the lower statistical level of the velocity moments, too, the physical consistency can only be restored if , i.e. if the equivalence transformation (4.4) gets broken.
To conclude, it should be pointed out that this proof (4.16) clearly shows that the transformation (4.4) itself, i.e. detached from any transport equations, leads to contradictions as soon as one considers more than one point (). However, for , i.e. for the mean velocity itself, no contradiction exits. Only as from onwards, the contradiction starts, which also can be clearly observed when comparing to DNS data as will be demonstrated in Section 5: The mismatch of the corresponding scaling laws which involve this contradictive scaling group (4.4) gets more strong as the order of the moments increases.
Also finally note again that in order to perform the proof (4.16) we basically used the consistency of (the first four lines of (4.16)) to show the inconsistency for all (the remaining lines of (4.16)). Hence, irrelevant of whether (4.4) represents an invariance or not, the transformation itself leads for to contradictions.
4.3 The superfluous behavior of equivalence on the averaged level
The immediate consequence on the averaged level in using an oversimplified statistical representation is that (4.4) will show a superfluous or artificial transformation behavior as soon as one changes to a more detailed representation which reveals more information about the underlying theory. From the perspective of an invariance analysis, it is intuitively clear that changing the statistical description for example to the Reynolds decomposed representation will be superior to the oversimplified notation used in (4.5), as it explicitly will reveal the nonlinearity within the system on the averaged lower level of the moments. Performing a Reynolds decomposition, for example, of the instantaneous 2-point velocity field into its mean and fluctuating part, will thus lead to
This relation explicitly unfolds its nonlinear connection to the one-point velocity fields, where is the corresponding 2-point correlation function for the (zero-mean) fluctuating field evaluated in the points and respectively, while and is the mean velocity evaluated in the same points. Next, the decomposition for the instantaneous 3-point velocity field
will not only nonlinearly connect to 1-point, but also to 2-point functions. This nonlinear connection will then iteratively continue for all higher multi-point functions. Hence, in a bijective, one-to-one manner the equivalence transformation in the oversimplified (linear) representation (4.4) then changes to the following more detailed (nonlinear) representation
where we explicitly expressed the transformation only up to third order in the velocity field for all point-indices in all possible combinations.
In contrast to representation (4.4), the above representation (4.19) of immediately reveals its superfluous or artificial behavior as an equivalence transformation. In (4.19) one can see that the aim of all terms containing the prefactor is to enforce a linear system scaling invariance which attempts to scale all field variables uniformly and independently from its coordinates. But since the underlying Navier-Stokes theory is inherently nonlinear, typical error terms proportional to and then emerge in (4.19) which need to be subtracted accordingly in order to allow for a misleading linear invariance property within a true nonlinear system of moments. In other words, although transformation (4.19) acts as a true equivalence transformation in the correspondingly Reynolds decomposed representation of the instantaneously averaged system (4.5), it acts artificially in that it interprets the nonlinear terms , , etc. as error terms which all are corrected for in order to achieve the desired linear system scaling invariance, but which, as was demonstrated before in (4.16), ultimately cannot exist physically as it induces inconsistencies already on the fluctuating level.
In order to avoid a misconception on this subtle issue, we will repeat the above argumentation again by using different words and by viewing it from a different perspective.
Our claim here is that for the moments the notation as used by Oberlack et al. should not be used when performing an analysis on invariance, because, due to its high notational simplification, it can easily lead to misleading results, in particular when ignoring its connection to the underlying deterministic theory. Careful, our statement is only to be interpreted as a precautionary measure to avoid possible misguidance from the outset. We do not say that the notation is wrong, we just say that it is counterproductive to use, because when working with this oversimplified notation without making a direct connection to the underling deterministic theory, one clearly has a higher risk to produce non-physical results than when working in the classical notation. The self-evident reason is that the oversimplified notation hides essential information of the underlying deterministic Navier-Stokes equations, while the notation, in contrast, reveals it. In other words, when not connecting the notation to the underlying deterministic theory, the notation is physically more transparent and helpful than the mathematically equivalent notation.
Of course, as both notations are just linked by a bijective (one-to-one) mapping, the classical notation is not free of the risk to induce a non-physical result, too. But such a non-physical result will be more easy to track in the detailed notation than in the oversimplified notation, where it’s even not noticeable without properly connecting the notation to the underlying theory. In the notation, however, unphysical results immediately reveal themselves by showing an artificial functional behavior, as in the case of the new unphysical scaling invariance (4.19).
It is clear that since this scaling invariance is unphysical in the notation (4.4) it is also unphysical in the notation (4.19). But in contrast to relation (4.4), the corresponding relation (4.19) immediately indicates that it’s unphysical. To be explicit, let’s consider the new scaling invariance in the notation (4.19) for the one-point correlations up to second order
where is the Reynolds-stress tensor, and compare it to the single and only scaling symmetry of the deterministic Navier-Stokes equations (2.3), which, when transcribed into the statistical form of the notation, will read
Although both (4.20) and (4.21) are mathematically admitted as invariant transformations of the underlying Reynolds-stress transport equations up to second order (Pope, 2000; Davidson, 2004), it is only transformation (4.20) which on this level of description immediately shows an artificial and thus a physically non-useful transformation behavior. Thus, without making a connection to the underlying fluctuating dynamics we already can observe that (4.20) is actually a physically non-useful transformation just by solely inspecting expression (4.20). This is definitely not possible in the oversimplified notation (4.4), and hence, therefore, one has the higher risk of being misguided when using this notation.
The reason why on this level of description (4.20) acts artificially and (4.21) not, is that in order to explicitly scale the values of the Reynolds-stress tensor one has to involve the mean velocity field itself (in the quadratic form ). But such a transformation (4.20) is not in accord with the idea of a Reynolds decomposition which has the intention to study turbulence statistics relative to the mean velocity field . The problem is that since the untransformed Reynolds-stress is quadratically built up by a (zero-mean) fluctuating field which measures the mean stress relative to the mean velocity , the transformed quantity in (4.20) doesn’t have this ‘relative measure’-property anymore because the values are now mixed with mean-velocity values. In other words, within the transformed system the quantity cannot be identified as a Reynolds-stress anymore, which actually should measure the stress relative to the transformed mean velocity .
In this sense, transformation (4.20) is not physically useful, which we directly can also observe when fitting the resulting scaling laws to DNS data (see Section 5). The observed result will be a clear mismatch between theory and experiment, but, as soon as the unphysical structure of transformation (4.20) is excluded or removed from the scaling laws, the matching will improve again by several orders of magnitude which, ultimately, is a clear indication that the scaling (4.20) is unphysical.
Moreover, when returning back to the previously mentioned perspective where the additional scaling terms in (4.20) are only required to restore a misleading linear scaling within a nonlinear theory of moments, the artificial transformation behavior of (4.20) can also be immediately seen when generating invariant functions. Consider the following invariant one-point function of transformation (4.20)
Now, when explicitly performing this invariant transformation
we see how the transformation rule for the Reynolds-stress acts artificially, in that one of its direct aims is to only cancel the disturbing nonlinear terms. Hence, it’s highly questionable whether, and in which sense, the invariant function (4.22) is actually physically relevant, since its corresponding invariant transformation (4.20) is not incorporating the nonlinear terms into the transformation process itself but instead only treats them as ‘error terms’ which must be cancelled accordingly.
Finally, the reader should note that such a superfluous construction is not specific to the Navier-Stokes theory, i.e. the construction principle itself to yield the misleading type of invariance (4.19) is not unique or particular to the Navier-Stokes equations but can be established basically in any statistical system of any nonlinear theory. In other words, the superfluous type of linear scaling invariance (4.19) inherently also exists for example in any unclosed statistical model of the nonlinear Maxwell or the nonlinear Schrödinger equations (see Appendix E), by just reformulating the corresponding expressions accordingly. Hence, if one is not careful enough wrong conclusions will be the general consequence.
In a more general sense we can thus conclude that systematically ignoring any information about the functional structure of an either closed or an unclosed model equation, which is directly linked to an underlying theory in using an oversimplified representa-tion (instead of an appropriate representation which explicitly reveals this information), unawarely allows for generating unphysical and thus useless results when performing an analysis on invariance. This conclusion can be stated as the following general principle:
: For every invariance analysis to be performed on an equation-based model which is linked to an underlying physical theory, it is crucial how the model equations are represented. It is necessary to reveal all information available for the system. If the notation tends to be oversimplified by not revealing all essential information, the analysis runs the risk to generate non-physical results without knowing.
In other words, caution has to be exercised in knowing that mathematical notation, even if formally correct, always has the unfortunate ability to simplify or even oversimplify the actual physical situation and thus causing misguidance, or suggesting an intuition which by closer inspection is not supported.
4.4 An example of a physically consistent statistical scaling symmetry
We want to close Section 4 with a contrasting (positive) example of a statistical scaling symmetry admitted by the Hopf equation (3.2), which not only is compatible with all three constraints (3.4), but which also acts fully consistent on the coarse-grained (averaged) as well as on the fine-grained (fluctuating) level. The symmetry is
which is the only admitted physical (global) scaling symmetry (2.3) of the Navier-Stokes equations just reformulated here for the Navier-Stokes-Hopf equation (3.2).222The ‘official’ theoretical development of extending classical point symmetry analysis from partial to functional differential equations is provided in Oberlack & Wacławczyk (2006), and recently also in Wacławczyk & Oberlack (2013b) adjusting it to Fourier space. However, it still lacks completeness, since the extension is based on an incomplete set of variables, in that the continuous index point (in coordinate space) or (in wavenumber space) are considered as being unchangeable quantities, which is not the case, simply because both variables carry a physical dimension which always, at least, must allow for a (re-)scaling in the units. A clear counter-example is given by (4.24). But also from a pure mathematical perspective, the independent variables or have to be included into the transformation process, even if they at most only act as integration (dummy) variables, nevertheless, their transformational change is always ruled by the Jacobian. Hence, by making sole use of the extended Lie algorithm developed in Oberlack & Wacławczyk (2006) and Wacławczyk & Oberlack (2013b), the fundamental scaling symmetry (4.24) can not be generated and essentially an important symmetry as (4.24) will thus be missed. For more details, we refer to our comments Frewer & Khujadze (2016a); Frewer et al. (2016b) and to our reactions Frewer & Khujadze (2016b); Frewer et al. (2016c), respectively. Note that (4.24) then induces the transformation rule for the
• velocity field: since the exponent inside the kernel of (3.3) should stay invariant, i.e. , in order to consistently define a functional Fourier transform also in the transformed (scaled) domain, the velocity field must scale as
• functional derivative: since the functional derivative carries the physical dimension of the considered field variable per volume, it must scale as
• functional volume element: since for path integrals the measure is of infinite size, it will scale accordingly for each field as
• where it should be noted that the continuous counting index (not the variable itself) stays invariant under transformation (4.24), i.e. , since by any set of real numbers is mapped again in a one-to-one manner to a set of real numbers of the same measure,
• probability density functional: since the physical constraint must stay invariant, it has to scale as
which, in contrast to (4.15), makes an intuitive physical statement, in that if the system experiences a global change in scale on the fine-grained level of type (4.25), then the system will experience this change in scale also on the coarse-grained level accordingly (4.29).
• -point velocity correlation functions: since the construction of all in the Hopf framework are given according to rule (4.2), they scale as
which is the only correct possible scaling behavior for the incompressible Navier-Stokes -point velocity correlation functions. To date, no other statistical scaling symmetry exists!
5 Comparing to DNS results
This section will investigate if all statistical scaling laws which are based on the “new statistical scaling symmetry” (4.4) qualify as useful scaling laws. For geometrically simple wall-bounded flows the general construction principle to generate these laws as “first-principle results” in the inertial region is given in Oberlack & Rosteck (2010, 2011), which recently in Avsarkisov et al. (2014) got extended to include more sophisticated wall-bounded flows.222In Oberlack & Rosteck (2010, 2011) as well as in Avsarkisov et al. (2014) all scaling laws for wall-bounded flows are actually based on two “new statistical symmetries”. Next to the “new scaling symmetry” (4.4) also a “new translation symmetry” is involved, but which, as the scaling symmetry too, turns out to be completely unphysical. This can be easily demonstrated by using the same procedure as shown and developed in this article. Of particular interest are those laws, which according to Oberlack & Rosteck (2010, 2011) should scale all higher order velocity moments beyond the log-law of the mean-velocity profile. The corresponding derivation procedure is revisited in Appendix F. Up to third moment, the explicit functional structure for all one-point velocity moments is derived in (F.10) and given as |
428710d435bee627 | Skip to main content
The shape of flowing water
Time: Thu 2019-09-05 15.15 - 16.30
Lecturer: Tomas Bohr (Technical University of Denmark)
Location: Oskar Klein auditorium FR4, AlbaNova
Abstract. When we observe fluid flows in nature, it is often because we notice the deformation of the fluid surface e.g., when light reflects on a water drop or an ocean wave. Such deformations can have great beauty and complexity, since the shape of the free surface is intimately and very nonlinearly coupled to the internal flow. In the talk, I will show examples of surfaces with shapes of thin needles or sharp walls and that lead to interesting symmetry breaking transitions, where sharp corners and polygonal structures appear - even in strongly turbulent flows. The existence of such structures, even in very “simple” flows, shows the complexity of the solutions to the Navier-Stokes equations with a free surface. Since the work of E. Madelung, it has been known that the Schrödinger equation can also be expressed as a fluid flow, and it has been suggested by Y. Couder and his collaborators that the mysteries of quantum mechanics can be imitated by bouncing droplets moving and interacting through surface waves. I shall discuss this exciting possibility briefly, but argue that the full spectrum of quantum effects cannot be obtained in this way. |
2db5a56b55f92c5c | Wave equations take the form:
$$\frac{ \partial^2 f} {\partial t^2} = c^2 \nabla ^2f$$
But the Schroedinger equation takes the form:
$$i \hbar \frac{ \partial f} {\partial t} = - \frac{\hbar ^2}{2m}\nabla ^2f + U(x) f$$
The partials with respect to time are not the same order. How can Schroedinger's equation be regarded as a wave equation? And why are interference patterns (e.g in the double-slit experiment) so similar for water waves and quantum wavefunctions?
• 1
$\begingroup$ For a connection between Schr. eq. and Klein-Gordon eq, see e.g. A. Zee, QFT in a Nutshell, Chap. III.5, and this Phys.SE post plus links therein. $\endgroup$ – Qmechanic Jul 27 '14 at 17:47
• 5
$\begingroup$ "In what sense is the Schrödinger equation a wave equation?" in a loose sense. Its solutions are intuitively wave-like. From a mathematical point of view, things are not as easy. Standard classifications of PDE's dont accommodate the Schrödinger equation, which kinda looks parabolic but it is not dissipative. It shares many properties with hyperbolic equations, so we can say that it is a wave-equation -- not in the technical sense, but yes in a heuristic sense. $\endgroup$ – AccidentalFourierTransform May 16 '17 at 22:21
• 1
$\begingroup$ I had left a comment on one of the answers below, but then deleted it... I'll post something similar here because it's along the lines of what @AccidentalFourierTransform said. I wouldn't call this equation a wave equation. It's not hyperbolic. Wave-like? Maybe. But I don't think I would try to defend the statement that it's a wave equation. To me, hyperbolic <-> wave equation and anything else is just something else. $\endgroup$ – tpg2114 May 16 '17 at 22:28
• 4
$\begingroup$ A variant on this question - why does double-slit interference produce such similar interference patterns for water waves as for the electron wavefunction, if their underlying differential equations are so different? $\endgroup$ – tparker May 16 '17 at 22:50
• 1
$\begingroup$ @tparker We see that all the time in, say, fluid dynamics. Linear potential equations can generate very similar solutions as the full Navier-Stokes equations under some circumstances despite the vast differences in their underlying equations. But, there are solutions that can't be produced by one or the other. I'm reluctant to say it's all just coincidental, but it's not unheard of that fundamentally different equations can produce similar solutions in a limited number of situations. $\endgroup$ – tpg2114 May 16 '17 at 23:15
Actually, a wave equation is any equation that admits wave-like solutions, which take the form $f(\vec{x} \pm \vec{v}t)$. The equation $\frac{\partial^2 f}{\partial t^2} = c^2\nabla^2 f$, despite being called "the wave equation," is not the only equation that does this.
If you plug the wave solution into the Schroedinger equation for constant potential, using $\xi = x - vt$
$$\begin{align} -i\hbar\frac{\partial}{\partial t}f(\xi) &= \biggl(-\frac{\hbar^2}{2m}\nabla^2 + U\biggr) f(\xi) \\ i\hbar vf'(\xi) &= -\frac{\hbar^2}{2m}f''(\xi) + Uf(\xi) \\ \end{align}$$
This clearly depends only on $\xi$, not $x$ or $t$ individually, which shows that you can find wave-like solutions. They wind up looking like $e^{ik\xi}$.
| cite | improve this answer | |
• 8
$\begingroup$ Doesn't any translationally-invariant PDE satisfy this criterion, even if it isn't rotationally invariant or even linear? $\partial f({\bf \xi}) / \partial x_i = \partial f({\bf \xi}) / \partial \xi_i$ and $\partial f({\bf \xi}) / \partial t = -{\bf v} \cdot {\bf \nabla}_{\bf \xi} f({\bf \xi})$, so if you take any translationally invariant PDE and replace every $\partial / \partial t$ with $-\bf{v} \cdot {\bf \nabla}_{\bf \xi}$, then can't any solution $f({\bf \xi})$ of the resulting 3D PDE be converted into a "wave-like" solution to the original 4D PDE by ... $\endgroup$ – tparker May 8 '17 at 6:35
• 1
$\begingroup$ ... letting $f({\bf \xi}) \to f({\bf x} - {\bf v} t)$? $\endgroup$ – tparker May 8 '17 at 6:35
• 1
$\begingroup$ I've expanded the comment above into an answer. $\endgroup$ – tparker May 10 '17 at 8:28
• 6
$\begingroup$ I disagree. For the wave equation any function f(x-vt) (with correctly fixed v) is a solution. In you Schroedinger example only very special special functions do fulfill the equation. $\endgroup$ – lalala May 17 '17 at 5:19
• $\begingroup$ @lalala ... for non-dispersive waves. However, plenty of other phenomena that your really do want to keep calling 'waves' (like slinky waves, sound in solids, light in glass, or ripples in a pond) no longer support that property: they do have an infinite basis of solutions of the form $e^{i(kx-\omega t)}$, but they no longer sustain $f(x-vt)$ as a solution, exactly in the way that the Schrödinger equation does. "Wave" is a bit of a fluffy term, but if you use that basis to write out the Schrödinger equation, you've got to be prepared to kick out the others. $\endgroup$ – Emilio Pisanty May 25 '17 at 18:32
Both are types of wave equations because the solutions behave as you expect for "waves". However, mathematically speaking they are partial differential equations (PDE) which are not of the same type (so you expect that the class of solutions, given some boundary conditions, will present different behaviour). The constraints on the eigenvalues of the linear operator are also particular to each of the types of PDE. Generally, a second order partial differential equation in two variables can be written as
$$A \partial_x^2 u + B \partial_x \partial_y u + C \partial_y^2 u + \text{lower order terms} = 0 $$
The wave equation in one dimension you quote is a simple form for a hyperbolic PDE satisfying $B^2 - 4AC > 0$.
The Schrödinger equation is a parabolic PDE in which we have $B^2 - 4AC < 0$. It can be mapped to the heat equation.
| cite | improve this answer | |
• 1
$\begingroup$ Shouldn't it be $B^2 - 4AC$ or maybe $2B$ in the PDE? $\endgroup$ – Luzanne May 18 '17 at 15:21
• $\begingroup$ Is there an insightful explanation why Schroedinger equation and heat equation are so similar (actually identical with $U=0$ except for being complex-valued) yet result in such different behavior? $\endgroup$ – divB Apr 4 at 22:40
• 1
$\begingroup$ @divB I think it's similar to how the ODE f'(x) = -k f(x) gives exponential decay (for real k > 0), but add an i before the k and you instead get oscillating solutions like you get for f''(x) = -k f(x) $\endgroup$ – Tim Goodman Jun 6 at 2:20
In the technical sense, the Schrödinger equation is not a wave equation (it is not a hyperbolic PDE). In a more heuristic sense, though, one may regard it as one because it exhibits some of the characteristics of typical wave-equations. In particular, the most important property shared with wave-equations is the Huygens principle. For example, this principle is behind the double slit experiment.
If you want to read about this principle and the Schrödinger equation, see Huygens' principle, the free Schrodinger particle and the quantum anti-centrifugal force and Huygens’ Principle as Universal Model of Propagation. See also this Math.OF post for more details about the HP and hyperbolic PDE's.
| cite | improve this answer | |
• $\begingroup$ Could you elaborate on this? I am trying to understand how the DS experiment can be predicted via solutions to the Schrödinger equation, usually it seems just heuristically done with wave equation solutions but these don't typically solve Schrödinger. Maybe you have a source which explains this? Looking through the sources you posted, they don't discuss the DS experiment. $\endgroup$ – doublefelix Sep 18 at 14:33
As Joe points out in his answer to a duplicate, the Schrodinger equation for a free particle is a variant on the slowly-varying envelope approximation of the wave equation, but I think his answer misses some subtleties.
Take a general solution $f(x)$ to the wave equation $\partial^2 f = 0$ (we use Lorentz-covariant notation and the -+++ sign convention). Imagine decomposing $f$ into a single plane wave modulated by an envelope function $\psi(x)$: $f(x) = \psi(x)\, e^{i k \cdot x}$, where the four-vector $k$ is null. The wave equation then becomes $$(\partial^\mu + 2 i k^\mu) \partial_\mu \psi = ({\bf \nabla} + 2 i\, {\bf k}) \cdot {\bf \nabla} \psi + \frac{1}{c^2} (-\partial_t + 2 i \omega) \partial_t \psi= 0,$$ where $c$ is the wave velocity.
If there exists a Lorentz frame in which $|{\bf k} \cdot {\bf \nabla} \psi| \ll |{\bf \nabla} \cdot {\bf \nabla} \psi|$ and $|\partial_t \dot{\psi}| \ll \omega |\dot{\psi}|$, then in that frame the middle two terms can be neglected, and we are left with $$i \partial_t \psi = -\frac{c^2}{2 \omega} \nabla^2 \psi,$$ which is the Schrodinger equation for a free particle of mass $m = \hbar \omega / c^2$.
$|\partial_t \dot{\psi}| \ll \omega |\dot{\psi}|$ means that the envelope function's time derivative $\dot{\psi}$ is changing much more slowly than the plane wave is oscillating (i.e. many plane wave oscillations occur in the time $|\dot{\psi} / \partial_t \dot{\psi}|$ that it takes for $\dot{\psi}$ to change significantly) - hence the name "slowly-varying envelope approximation." The physical interpretation of $|{\bf k} \cdot {\bf \nabla} \psi| \ll |{\bf \nabla} \cdot {\bf \nabla} \psi|$ is much less clear and I don't have a great intuition for it, but it seems to basically imply that if we take the direction of wave propagation to be $\hat{{\bf z}}$, then $\partial_z \psi$ changes very quickly in space along the direction of wave propagation (i.e. you only need to travel a small fraction of a wavelength $\lambda$ before $\partial_z \psi$ changes significantly). This is a rather strange limit, because clearly it doesn't really make sense to think of $\psi$ as an "envelope" if it changes over a length scale much shorter than the wavelength of the wave that it's supposed to be enveloping. Frankly, I'm not even sure if this limit is compatible with the other limit $|\partial_t \dot{\psi}| \ll \omega |\dot{\psi}|$. I would welcome anyone's thoughts on how to interpret this limit.
| cite | improve this answer | |
As stressed in other answers and comments, the common point between these equations is that their solutions are "waves". It is the reason why the physics they describe (eg interference patterns) is similar.
Tentatively, I would define a "wavelike" equation as
1. a linear PDE
2. whose space of spacially bounded* solutions admits a (pseudo-)basis of the form $$e^{i \vec{k}.\vec{x} - i \omega_{\alpha}(\vec{k})t}, \vec{k} \in \mathbb{R}^n, \alpha \in \left\{1,\dots,r\right\}$$ with $\omega_1(\vec{k}),\dots,\omega_r(\vec{k})$ real-valued (aka. dispersion relation).
For example, in 1+1 dimension, these are going to be the PDE of the form $$\sum_{p,q} A_{p,q} \partial_x^p \partial_t^q \psi = 0$$ such that, for all $k \in \mathbb{R}$ the polynomial $$Q_k(\omega) := \sum_{p,q} (i)^{p+q} A_{p,q} k^p \omega^q$$ only admits real roots. In this sense this is reminiscent of the hyperbolic vs parabolic classification detailed in @DaniH answer, but without giving a special role to 2nd order derivatives.
Note that with such a definition the free Schrödinger equation would qualify as wavelike, but not the one with a potential (and rightly so I think, as the physics of, say, the quantum harmonic oscillator is quite different, with bound states etc). Nor would the heat equation $\partial_t \psi -c \partial_x^2 \psi = 0$: the '$i$' in the Schrödinger equation matters!
* Such equations will also often admit evanescent waves solutions corresponding to imaginary $\vec{k}$.
| cite | improve this answer | |
This answer elaborates on my comment to David Z's answer. I think his definition of a wave equation is excessively broad, because it includes every translationally invariant PDE and every value of $v$. For simplicity, let's specialize to linear PDE's in one spatial dimension. A general order-$N$ such equation takes the form
$$\sum_{n=0}^N \sum_{\mu_1, \dots, \mu_n \in \{t, x\}} c_{\mu_1, \dots, \mu_n} \partial_{\mu_1} \dots \partial_{\mu_n}\, f(x, t) = 0.$$
To simplify the notation, we'll let $\{ \mu \}$ denote $\mu_1, \dots, \mu_n$, so that
$$\sum_{n=0}^N \sum_{\{\mu\}} c_{\{\mu\}} \partial_{\mu_1} \dots \partial_{\mu_n}\, f(x, t) = 0.$$
Let's make the ansatz that $f$ only depends on $\xi := x - v t$. Then $\partial_x f(\xi) = f'(\xi)$ and $\partial_t f(\xi) = -v\, f'(\xi)$. If we define $a_{\{\mu\}} \in \mathbb{N}$ to simply count the number of indices $\mu_i \in \{\mu\}$ that equal $t$, then the PDE becomes
$$\sum_{n=0}^N f^{(n)}(\xi) \sum_{\{\mu\}} c_{\{\mu\}} (-v)^{a_{\{\mu\}}} = 0.$$
Defining $c'_n := \sum \limits_{\{\mu\}} c_{\{\mu\}} (-v)^{a_{\{\mu\}}}$, we get the ordinary differential equation with constant coefficients
$$\sum_{n=0}^N c'_n\ f^{(n)}(\xi) = 0.$$
Now as usual, we can make the ansatz $f(\xi) = e^{i z \xi}$ and find that the differential equation is satisfied as long as $z$ is a root of the characteristic polynomial $\sum \limits_{n=0} c_n' (iz)^n$. So our completely arbitrary translationally invariant linear PDE will have "wave-like solutions" traveling at every possibly velocity!
| cite | improve this answer | |
• 2
$\begingroup$ Interesting point. I'm not ready to admit that the definition is overly broad; maybe all translationally invariant linear PDEs are "wave equations". But I am wondering whether there's more to the story. For example, do some of these PDEs admit other solutions which cannot be expressed as a linear combination of waves $f(x \pm vt)$? $\endgroup$ – David Z May 10 '17 at 19:07
• $\begingroup$ @DavidZ The PDE's don't even have to be linear, as mentioned in my original comment (I just considered the linear case for simplicity). If you allow the phrase "wave equation" to cover general TI nonlinear PDE's, in my opinion it becomes so broad that you might as well just say "translationally invariant PDE." $\endgroup$ – tparker May 10 '17 at 23:02
While it is not very technical in nature it is worth going back to the first definition of what a wave is (that is, the one you use before learning that "a wave is a solution to a wave-equation"). The wording I use in the introductory classes is
a moving disturbance
where the 'disturbance' is allowed to be in any measurable quantity, and simple means that the quantity is seen to vary from its equilibrium value and then return to that value.
The surprising thing is not how general that expression is, but that it is necessary to use something that general to cover all the basic cases: waves on strings, surface waves on liquids: sound and light.
And by that definition Schrödinger's equation is used to describe the moving variation of various observables, so arguably qualifies.
There is room to quibble—the wave-function itself is not an observable, and even the distributions of values that can be observed are often statistical in nature—but I've always been comfortable with this approach.
| cite | improve this answer | |
• $\begingroup$ Yes, I'm starting to regret placing the bounty on this question instead of creating my own. The thing I really want to understand is the much more concrete question of why slit interference patterns look so similar (both qualitatively and quantitatively) for the free-particle Schrödinger equation and for "the" wave equation, even the the differential equations are so mathematically different. $\endgroup$ – tparker May 24 '17 at 2:15
• $\begingroup$ I see. That is an interesting question, but not one I've given a lot of thought to before. A line of inquiry which presents itself is considering the TDSE as the Newtonian approximation to the underlying relativistic, quantum, wave-equations, which have the symmetry between time and space that we see in the "the" wave-equation. Certainly that fits with the usual heuristic picture in which $H = p^2/2m + V(x)$ plus the time derivative of $\Psi_0\exp(kx - \omega t)$ resulting in energy while spacial derivatives result in momentum (to within appropriate constants, of course). $\endgroup$ – dmckee --- ex-moderator kitten May 24 '17 at 2:40
• 1
$\begingroup$ @tparker just to let you know, reading your comments prompted me to ask this related question. physics.stackexchange.com/questions/335225/… $\endgroup$ – CDCM May 25 '17 at 1:06
Your Answer
|
7a155b53e2b272c9 | The Full Wiki
More info on Stimulated emission
Stimulated emission: Quiz
Question 1: Electrons have energy in proportion to how far they are on average from the nucleus of an ________.
Question 2: Specifically, the atom will act like a small electric ________ which will oscillate with the external field.
Electric dipole momentMagnetic fieldMagnetic momentDipole
Question 3: When this happens due to the presence of the electromagnetic field from a ________, a photon is released in the same phase and direction as the "stimulating" photon, and is called stimulated emission.
ElectronPhotonStandard ModelAtom
Question 4: The ________ forces some electrons to be farther from the nucleus than others, which is why all the electrons in an atom do not simply occupy the 1s orbital.
Quantum mechanicsPauli exclusion principleIntroduction to quantum mechanicsSchrödinger equation
Question 5: When such a decay occurs, the energy difference between the level the electron was at and the new level must be released either as a photon or a ________.
Bound statePhononAtomQuasiparticle
Question 6: An external source of energy stimulates atoms in the ground state to transition to the excited state, creating what is called a ________.
LightPhotonPopulation inversionStimulated emission
Question 7: In optics, stimulated emission is the process by which an electron, perturbed by a photon having the correct energy, may drop to a lower ________ level resulting in the creation of another photon.
Question 8: ________ and how they interact with each other and electromagnetic fields form the basis for most of our understanding of chemistry and physics.
Question 9: and the general gain equation approaches a linear ________:
Algebraic geometryAsymptoteConic sectionCurve
Question 10: If the atom is in the excited state, it may decay into the ground state by the process of ________, releasing the difference in energies between the two states as a photon.
Spontaneous emissionStimulated emissionVirtual particlePhonon
Got something to say? Make a comment.
Your name
Your email address |
71f7bd38bdda7fde | Revista Matemática Iberoamericana
Full-Text PDF (638 KB) | Metadata | Table of Contents | RMI summary
Volume 34, Issue 1, 2018, pp. 245–304
DOI: 10.4171/RMI/985
Published online: 2018-02-06
Long wave asymptotics for the Euler–Korteweg system
Sylvie Benzoni-Gavage[1] and David Chiron[2]
(1) Université Claude Bernard Lyon 1, Villeurbanne, France
(2) Université Côte d'Azur, Nice, France
The Euler–Korteweg system (EK) is a fairly general nonlinear waves model in mathematical physics that includes in particular the fluid formulation of the NonLinear Schrödinger equation (NLS). Several asymptotic regimes can be considered, regarding the length and the amplitude of waves. The first one is the free wave regime, which yields long acoustic waves of small amplitude. The other regimes describe a single wave or two counter propagating waves emerging from the wave regime. It is shown that in one space dimension those waves are governed either by inviscid Burgers or by Korteweg–de Vries equations, depending on the spatio-temporal and amplitude scalings. In higher dimensions, those waves are found to solve Kadomtsev–Petviashvili equations. Error bounds are provided in all cases. These results extend earlier work on defocussing (NLS) (and more specifically the Gross–Pitaevskii equation), and sheds light on the qualitative behavior of solutions to (EK), which is a highly nonlinear system of PDEs that is much less understood in general than (NLS).
Keywords: Euler–Korteweg system, capillary fluids, Korteweg–de Vries equation, Kadomtsev–Petviashvili equation, weakly transverse Boussinesq system
Benzoni-Gavage Sylvie, Chiron David: Long wave asymptotics for the Euler–Korteweg system. Rev. Mat. Iberoam. 34 (2018), 245-304. doi: 10.4171/RMI/985 |
445d947dd030ebdf | Hoşgeldiniz, Misafir . Oturum Aç . English
Neredeyim: Ninova / Dersler / Fen-Edebiyat Fakültesi / FIZ 352E - kuvantum fiziği-II
FIZ 352E - kuvantum fiziği-II
Dersin Amaçları
1 - To learn how to apply quantum mechanics to the three dimensional real systems
2 - To understand the spin angular momentum which has no classical counterpart
3 - To understand how the structure of the hydrogen-type atoms can be explained successfully in quantum mechanics
Dersin Tanımı
3 dimensional Schrödinger equation: systems with spherical symmetry. Radial equation. Free particle. Infinite spherical well. Two-particle problem. Hydrogen atom. Stern-Gerlach experiment, spin angular momentum. Differential and matrix representations of operators. Spin- magnetic field interaction. Addition of angular momenta: Clebsch-Gordan coefficients. Identical particles. Particle interchange operator. Pauli principle. N-particle systems. Spin and statistics. Time-independent perturbation theory: first and second order perturbations. Degenerate perturbation theory. Stark effect. Fine structure and hyperfine structure of the hydrogen atom. Zeeman effect. EPR paradox and the principle of locality. Seperability problem and quantum entanglement.
Gönül Eryürek
Dersin Dili
Dersler . Yardım . Hakkında
Ninova, İTÜ Bilgi İşlem Daire Başkanlığı ürünüdür. © 2019 |
dce949836681d734 | Superposition of almost plane waves (diagonal lines) from a distant source and waves from the wake of the ducks. Linearity holds only approximately in water and only for waves with small amplitudes relative to their wavelengths.
Rolling motion as superposition of two motions. The rolling motion of the wheel can be described as a combination of two separate motions: translation without rotation, and rotation without translation.
A function that satisfies the superposition principle is called a linear function. Superposition can be defined by two simpler properties; additivity and homogeneity
for scalar a.
This principle has many applications in physics and engineering because many physical systems can be modeled as linear systems. For example, a beam can be modeled as a linear system where the input stimulus is the load on the beam and the output response is the deflection of the beam. The importance of linear systems is that they are easier to analyze mathematically; there is a large body of mathematical techniques, frequency domain linear transform methods such as Fourier, Laplace transforms, and linear operator theory, that are applicable. Because physical systems are generally only approximately linear, the superposition principle is only an approximation of the true physical behaviour.
The superposition principle applies to any linear system, including algebraic equations, linear differential equations, and systems of equations of those forms. The stimuli and responses could be numbers, functions, vectors, vector fields, time-varying signals, or any other object that satisfies certain axioms. Note that when vectors or vector fields are involved, a superposition is interpreted as a vector sum.
Relation to Fourier analysis and similar methodsEdit
By writing a very general stimulus (in a linear system) as the superposition of stimuli of a specific and simple form, often the response becomes easier to compute.
For example, in Fourier analysis, the stimulus is written as the superposition of infinitely many sinusoids. Due to the superposition principle, each of these sinusoids can be analyzed separately, and its individual response can be computed. (The response is itself a sinusoid, with the same frequency as the stimulus, but generally a different amplitude and phase.) According to the superposition principle, the response to the original stimulus is the sum (or integral) of all the individual sinusoidal responses.
As another common example, in Green's function analysis, the stimulus is written as the superposition of infinitely many impulse functions, and the response is then a superposition of impulse responses.
Fourier analysis is particularly common for waves. For example, in electromagnetic theory, ordinary light is described as a superposition of plane waves (waves of fixed frequency, polarization, and direction). As long as the superposition principle holds (which is often but not always; see nonlinear optics), the behavior of any light wave can be understood as a superposition of the behavior of these simpler plane waves.
Wave superpositionEdit
Two waves traveling in opposite directions across the same medium combine linearly. In this animation, both waves have the same wavelength and the sum of amplitudes results in a standing wave.
Waves are usually described by variations in some parameter through space and time—for example, height in a water wave, pressure in a sound wave, or the electromagnetic field in a light wave. The value of this parameter is called the amplitude of the wave, and the wave itself is a function specifying the amplitude at each point.
In any system with waves, the waveform at a given time is a function of the sources (i.e., external forces, if any, that create or affect the wave) and initial conditions of the system. In many cases (for example, in the classic wave equation), the equation describing the wave is linear. When this is true, the superposition principle can be applied. That means that the net amplitude caused by two or more waves traversing the same space is the sum of the amplitudes that would have been produced by the individual waves separately. For example, two waves traveling towards each other will pass right through each other without any distortion on the other side. (See image at top.)
Wave diffraction vs. wave interferenceEdit
With regard to wave superposition, Richard Feynman wrote:[2]
Other authors elaborate:[3]
The difference is one of convenience and convention. If the waves to be superposed originate from a few coherent sources, say, two, the effect is called interference. On the other hand, if the waves to be superposed originate by subdividing a wavefront into infinitesimal coherent wavelets (sources), the effect is called diffraction. That is the difference between the two phenomena is [a matter] of degree only, and basically they are two limiting cases of superposition effects.
Yet another source concurs:[4]
In as much as the interference fringes observed by Young were the diffraction pattern of the double slit, this chapter [Fraunhofer diffraction] is therefore a continuation of Chapter 8 [Interference]. On the other hand, few opticians would regard the Michelson interferometer as an example of diffraction. Some of the important categories of diffraction relate to the interference that accompanies division of the wavefront, so Feynman's observation to some extent reflects the difficulty that we may have in distinguishing division of amplitude and division of wavefront.
Wave interferenceEdit
The phenomenon of interference between waves is based on this idea. When two or more waves traverse the same space, the net amplitude at each point is the sum of the amplitudes of the individual waves. In some cases, such as in noise-cancelling headphones, the summed variation has a smaller amplitude than the component variations; this is called destructive interference. In other cases, such as in a line array, the summed variation will have a bigger amplitude than any of the components individually; this is called constructive interference.
green wave traverse to the right while blue wave traverse left, the net red wave amplitude at each point is the sum of the amplitudes of the individual waves.
wave 1
wave 2
Two waves in phase Two waves 180° out
of phase
Departures from linearityEdit
In most realistic physical situations, the equation governing the wave is only approximately linear. In these situations, the superposition principle only approximately holds. As a rule, the accuracy of the approximation tends to improve as the amplitude of the wave gets smaller. For examples of phenomena that arise when the superposition principle does not exactly hold, see the articles nonlinear optics and nonlinear acoustics.
Quantum superpositionEdit
In quantum mechanics, a principal task is to compute how a certain type of wave propagates and behaves. The wave is described by a wave function, and the equation governing its behavior is called the Schrödinger equation. A primary approach to computing the behavior of a wave function is to write it as a superposition (called "quantum superposition") of (possibly infinitely many) other wave functions of a certain type—stationary states whose behavior is particularly simple. Since the Schrödinger equation is linear, the behavior of the original wave function can be computed through the superposition principle this way.[5]
The projective nature of quantum-mechanical-state space makes an important difference: it does not permit superposition of the kind that is the topic of the present article. A quantum mechanical state is a ray in projective Hilbert space, not a vector. The sum of two rays is undefined. To obtain the relative phase, we must decompose or split the ray into components
where the and the belongs to an orthonormal basis set. The equivalence class of allows a well-defined meaning to be given to the relative phases of the .[6]
There are some likenesses between the superposition presented in the main on this page, and quantum superposition. Nevertheless, on the topic of quantum superposition, Kramers writes: "The principle of [quantum] superposition ... has no analogy in classical physics." According to Dirac: "the superposition that occurs in quantum mechanics is of an essentially different nature from any occurring in the classical theory [italics in original]."[7]
Boundary value problemsEdit
A common type of boundary value problem is (to put it abstractly) finding a function y that satisfies some equation
with some boundary specification
For example, in Laplace's equation with Dirichlet boundary conditions, F would be the Laplacian operator in a region R, G would be an operator that restricts y to the boundary of R, and z would be the function that y is required to equal on the boundary of R.
In the case that F and G are both linear operators, then the superposition principle says that a superposition of solutions to the first equation is another solution to the first equation:
while the boundary values superpose:
Using these facts, if a list can be compiled of solutions to the first equation, then these solutions can be carefully put into a superposition such that it will satisfy the second equation. This is one common method of approaching boundary value problems.
Additive state decompositionEdit
Consider a simple linear system :
By superposition principle, the system can be decomposed into
Superposition principle is only available for linear systems. However, the Additive state decomposition can be applied not only to linear systems but also nonlinear systems. Next, consider a nonlinear system
where is a nonlinear function. By the additive state decomposition, the system can be ‘additively’ decomposed into
This decomposition can help to simplify controller design.
Other example applicationsEdit
• In electrical engineering, in a linear circuit, the input (an applied time-varying voltage signal) is related to the output (a current or voltage anywhere in the circuit) by a linear transformation. Thus, a superposition (i.e., sum) of input signals will yield the superposition of the responses. The use of Fourier analysis on this basis is particularly common. For another, related technique in circuit analysis, see Superposition theorem.
• In physics, Maxwell's equations imply that the (possibly time-varying) distributions of charges and currents are related to the electric and magnetic fields by a linear transformation. Thus, the superposition principle can be used to simplify the computation of fields which arise from a given charge and current distribution. The principle also applies to other linear differential equations arising in physics, such as the heat equation.
• In mechanical engineering, superposition is used to solve for beam and structure deflections of combined loads when the effects are linear (i.e., each load does not affect the results of the other loads, and the effect of each load does not significantly alter the geometry of the structural system).[8] Mode superposition method uses the natural frequencies and mode shapes to characterize the dynamic response of a linear structure.[9]
• In hydrogeology, the superposition principle is applied to the drawdown of two or more water wells pumping in an ideal aquifer.
• In process control, the superposition principle is used in model predictive control.
• The superposition principle can be applied when small deviations from a known solution to a nonlinear system are analyzed by linearization.
• In music, theorist Joseph Schillinger used a form of the superposition principle as one basis of his Theory of Rhythm in his Schillinger System of Musical Composition.
According to Léon Brillouin, the principle of superposition was first stated by Daniel Bernoulli in 1753: "The general motion of a vibrating system is given by a superposition of its proper vibrations." The principle was rejected by Leonhard Euler and then by Joseph Lagrange. Later it became accepted, largely through the work of Joseph Fourier.[10]
See alsoEdit
1. ^ The Penguin Dictionary of Physics, ed. Valerie Illingworth, 1991, Penguin Books, London
2. ^ Lectures in Physics, Vol, 1, 1963, pg. 30-1, Addison Wesley Publishing Company Reading, Mass [1]
3. ^ N. K. VERMA, Physics for Engineers, PHI Learning Pvt. Ltd., Oct 18, 2013, p. 361. [2]
4. ^ Tim Freegarde, Introduction to the Physics of Waves, Cambridge University Press, Nov 8, 2012. [3]
5. ^ Quantum Mechanics, Kramers, H.A. publisher Dover, 1957, p. 62 ISBN 978-0-486-66772-0
6. ^ Solem, J. C.; Biedenharn, L. C. (1993). "Understanding geometrical phases in quantum mechanics: An elementary example". Foundations of Physics. 23 (2): 185–195. Bibcode:1993FoPh...23..185S. doi:10.1007/BF01883623.
8. ^ Mechanical Engineering Design, By Joseph Edward Shigley, Charles R. Mischke, Richard Gordon Budynas, Published 2004 McGraw-Hill Professional, p. 192 ISBN 0-07-252036-1
9. ^ Finite Element Procedures, Bathe, K. J., Prentice-Hall, Englewood Cliffs, 1996, p. 785 ISBN 0-13-301458-4
10. ^ Brillouin, L. (1946). Wave propagation in Periodic Structures: Electric Filters and Crystal Lattices, McGraw–Hill, New York, p. 2.
Further readingEdit
External linksEdit |
7b9e3ec047fbf41b | Wednesday, June 12, 2019
LS: Maybe — you are describing research to be done.
1. I noticed a couple of typos, presumably in your source but which you might like to correct:
About 80% through the text the paragraph starting with "The important part of the idea of the ether was that ..." is the start of Lee's reply and should therefore commence with an "LS: ".
And in Lee's penultimate response "LS: Maybe — you are describing research do be done" the "do" should clearly be "to".
1. Hi Mike,
Thanks for your attentive reading. I have fixed that.
2. I'm reading Smolins book at the moment so, thank you very much for posting this interview (^_-)
3. I think it is incorrect to say that Bohr is an antirealist about quantum objects. It is often thought that when he says “There is no quantum world” he denies the reality of the world. I like to think that his view is compatible with realism about the world, just that we cannot say the world is quantum in itself. More precisely: Quantumness is the result of our description of the world, when we have no choice but to use classical objects as measuring instruments of the micro objects.
We cannot see the world directly, only through instruments that possess certain properties (those that produce an outcome in each run). Bohr was saying, When we try to use these instruments to see the microworld, we end up in weird descriptions that are noncommutative, i.e. a quantum description. It is in this sense we cannot say the microworld is itself quantum.
4. Great stuff. Spotted typos "Therefor" and "w while" if interested.
5. It isn't that "The Ether" do not Exist, Instead, Its Existence that can not grasp The Ether ...
Therefore, As existent entities inside existence, What can we do about "The Ether" un-graspable and Unnatural "Nature" ??
Build a Network of Synced Vacuum Chambers all around the planet ... Just to try to get The Instruments of the Network Synced will be a technological challenge that can change the outdated local physics-approaches ...
Of course, That Will not inform about The Ether's Existence but at least, provide more room for physical research ...
1. Funny thing that the question of ether is still open in physics.
History: In 1916 Einstein received a letter from H. Lorentz about a (Gedanken) experiment regarding rotation. Lorentz' argument: Rotation can physically not be understood without an ether. Einstein fully agreed! He gave an own example saying: The Foucault pendulum is not understandable without the existence of an ether. But then he added: I can anyway not except an ether because it is in conflict with a principle my theory is based on. - So Einstein has clearly neglected observable facts which contradict his theory.
I have recently asked several professors of relativity if there is any new state of this discussion. --> There is none.
So, why does no one see the necessity to clarify this point? I would say that this question (and others of this kind) have priority before doing much philosophy on top of it.
2. Lorentz' argument: Rotation can physically not be understood without an ether.
That's inaccurate. Lorentz merely noted the well-known fact that the first-order Sagnac effect (for a waveguide cable circling the earth) is consistent with a stationary ether, although he agreed that it is also consistent with Einstein’s theory of relativity. Lorentz argued that Einstein’s relativistic interpretation is not the only possible one. Also, since we can detect absolute rotation, Lorentz argued that we cannot simply assume that it’s impossible to detect absolute translation. This (he said) can only be established by observation. Of course, he agreed that observation so far supports the principle of relativity, but he cautioned that “future observations may force us to abandon this hypothesis”. This is why he preferred to maintain the ether interpretation.
Einstein fully agreed!
Not at all. Einstein, writing in a conciliatory way to Lorentz (who he revered), went so far as to say that one could refer to the spacetime metrical field as “the ether”, especially in general relativity where the metric becomes a dynamical element of the theory. However, “this new ether theory would not violate the relativity principle any more, for the state of the guv=ether would not be that of a rigid body in an independent state of motion”. He went on to correct Lorentz’s scenario, by mentioning that the metric field in the vicinity of the rotating earth would actually not be perfectly stationary (as Lorentz supposed), but would rotate, albeit very slightly… referring to what is now called the Lense-Thirring effect.
He said no such thing. He merely pointed out that, because of the Lense-Thirring effect, Focucault’s pendulum would also precess, by about 0.01”/year, and lamented that it was too small to measure.
No, what he actually said was “I prefer the guv [metrical] interpretation to an incomplete comparison with anything material [i.e., Lorentz’s ether]”. He explained his view of this much more fully in the famous Leiden lecture a few years later.
So Einstein has clearly neglected observable facts which contradict his theory.
There is no mention in these letters, either by Lorentz or Einstein, of any observable facts contradicting Einstein’s theory.
6. "I don’t think there are many proponents of the Copenhagen view among people working in quantum foundations, or who have otherwise thought about the issues carefully. I don’t think there are many enthusiastic followers of Bohr left alive." There's definitely a *neo*-Copenhagenism out there. Anyone interested in foundations of QM who's up to the math should IMO read Klaas Landsman's "Foundations of Quantum Theory: From Classical Concepts to Operator Algebras", Springer, 2017, at least as a counterpoint to other approaches. There are others. Neo-Copenhagenism is in the process of finding a mathematical grounding.
There is a substantial body of philosophy of QFT that Smolin effectively doesn't mention by his choice of Philosophers. Simon Saunders and David Wallace have distinctively different approaches to other work on QFT, precisely because of their focus on Many Worlds that Lee Smolin mentions. How can one mention philosophers of physics without mentioning Hans Halvorson or Laura Ruetsche, not that they should be first from a long list of others? Since about 2000, my sense is that every philosophy of physics student wants to work on QFT even if they don't, quite possibly because they have been scared off by their supervisors or by the impenetrability of the physics literature. In addition, many mathematical physicists working on algebraic QFT and similar approaches have a distinctive philosophical bent.
7. Even if one stone makes one complete revolution, so what if it has spin-1 ?
8. I've never understood why, from the beginning, "observers" and "measurements" were not just considered special cases of the millions of possible natural environmental conditions that can cause the wavefunction to collapse into a singular eigenstate.
I think it was Penrose that suggested a threshold of mass superposition that might cause that, and proposed a practical (realizable) test for that. If it isn't a sharp threshold, it could be an asymptotic probability that is a function of the mass/energy involved.
I've never understood the insistence upon any consciousness or observer or "measurement" per se. Both consciousness and qualia have easy explanations due to the way we already know the brain functions subconsciously. We don't know half of all we'd like to know, but we know enough to dismiss any notion these are supernatural mysteries or require any special physics or forces at all; that is all voodoo religion concocted by people that fear the obvious truth.
Like all things in nature, we can take what occurs naturally but rarely, and engineer methods to make it occur reliably. In this case, the rare phenomenon is a wavefunction collapse that has a significant macro-world (human scale) observable consequence.
But that is all a measurement device does. And a brain interprets its sensors, which are also just measurement devices. The critical issue would be entangling enough mass to force a wavefunction collapse due to something like gravitational superposition (I do not rule out other candidates).
1. Decoherence of quantum states is modeled not according to spooky ideas of consciousness. Instead decoherence is just how a large action S = Nħ system that is classical on a coarse grain level takes the quantum phase from a small system. In this way the quantum phase of a measured wave function is not destroyed, but is "randomized" into a system with large number of quanta. In this way the apparent reduction of a wave function in a measurement is no different from the interaction of a cosmic ray with quartz that leave a track. We observers just happen to find this millions of years later. There is little reason to think our observing this track has some wave function collapsing properties due to consciousness.
9. What is your opinion on theories of time, that mainly exist in the humanities, where time is considered to be a subjective matter. Round 1900 Henri Bergson doubted on every issue, that was put under an a priori law, even the phenomenon of time. He said chronological time is such an a priori law and he wasn’t even sure whether it existed. For him passage of time was merely an interval between A and B. With measuring the passage of time one wouldn’t be able to grasp the essence of time. Can one measure the phenomenon of time that lies behind the interval?
10. At this stage in my amateur studies (30 years of adult contemplation), I'm starting to suspect that the restrictions of consciousness lead to the confusion over why the math has so many more degrees of freedom than we experience (quantum probabilities rather than linear determinism).
We don't know what consciousness is, while treating it like a fish treats water ... as a given. What if an arrow of time is merely a special perspective necessary to be conscious in the way that we are conscious. We only have faith in the past because we have a MEMORY of an ordered past, which we then PROJECT into a "future" that we never reach.
Memory and anticipation ... these might be the prerequisites for our kind of consciousness (which always resides in "now").
The universal wave function includes ALL ways to remember, and ALL possible predictions. We tend to experience a world of rules because consciousness is only possible for solutions to the wave function that "make sense" ... that reside in near the center of the normal distribution of possibilities (all the air molecules COULD be on one side of the room, or the shattered cup COULD rise from the floor and reassemble on the table... but don't plan on it.)
1. Len Arends: We do know what consciousness is and don't treat it like a fish treats water.
We can interrupt consciousness by interrupting brain function; this can be done with physical shock, by chemical interference, perhaps even by electromagnetic interference. We can do it by fuel deprivation; blocking the flow of blood to the brain or depriving it of oxygen. It can be done by internal interference; an accumulation of waste products in neurons interfering with their operation and leading to sleep; during which the blood vessels expand to flush this waste into the blood stream, but the pressure on the neurons they serve prevents them from proper operation. Thus the necessity of unconsciousness (although the dolphins have evolved a clever system of shutting down 1/2 of their brain for "cleaning" at a time, thus sustaining 24/7 consciousness).
The fact that we can interfere with consciousness in so many ways, and chemically alter consciousness in so many ways, is all the proof we need to say consciousness is a function of physical brain operation.
We don't treat consciousness "as a given", we treat it as a product of the mechanical operation of a device called the "brain". Nor is there any need to claim humans are special in this regard, many species exhibit behavior that is only plausibly explained by consciousness.
Consciousness is a progression of thought; without significant outside stimulus it is a progression of self-stimulus in the brain; thoughts trigger thoughts, endlessly (or until consciousness is interrupted by the above methods, or the death of the brain). With outside stimulus, consciousness is generally focused on processing that. Most of our lives it is a combination of both outside and internal stimulation.
I find nothing particularly mysterious or magical or supernatural about consciousness (or qualia) itself.
I will grant that consciousness requires time, without time there can be no change of state, and because consciousness is entirely dependent upon neurons processing electrochemical inputs and producing electrochemical outputs, it is entirely dependent upon billions of changes of state per second. But there is nothing fundamentally different about the passage of time for a living brain versus the passage of time for a kilo of carbon-14 or uranium; in both cases they will experience changes of state over time.
And of course we reach the "future", I planned to write this sentence before I wrote it, and I have reached that future.
2. Dr Castaldo:
Lee Smolin says:
It's not that simple.
3. Mike: Yes, it is that simple. I don't accept Dr. Smolin as an expert on this topic.
Just like trying pretty mathematical tricks to posit new particles that never showed up in the LHC, speculating about qualia and consciousness without reference to the thousands of experimental results on actual brain function and organization, particularly in damaged brains, is pointless fantasizing.
However, when grounded in such experiments, the clues they give us to how consciousness works, and how the subconscious works and supports that, give us enough information to see the explanation of both does not require anything new in fundamental physics, or time, or space, or anything else.
I work on the (currently) fastest supercomputer in the world; Summit, but the interactions between its trillions of parts is very closely defined. It is engineered that way. The brain is not engineered, and its interactions between parts is not closely controlled at all, its present form is the result of evolution and millions of "whatever seems to work right now" randomized additions and changes.
The result is a patchwork of spaghetti code, legacy bugs that didn't matter when they were coded and can't be corrected now that they matter, and broadcast communications that influence parts that have nothing to do with the problem at hand.
Read up on psychological "priming" experiments for examples of how completely unrelated minutia can influence decisions that can have real life consequences. What is behind those "priming" errors in thinking is broadcast information to all parts of the brain, so neural modules with even a weak supportive link to a word or feeling (like 'cold') become more likely to feel like the "right answer" versus neural modules that would respond to the opposite ('warm'). They are 'primed' by the broadcast to be more responsive.
That gives us a clue to qualia: Subconsciously our brain state consists of millions of subconscious neural "models" that have been processing information from the outside environment and other internal models, and are in various states of excitation. My qualia of "red" is the state of having everything I know about "red" primed, and everything anti-red not primed. That includes related emotions. It can influence my thoughts, making some neural models more likely to respond than others.
There is no evidence the brain does anything more than this, or requires any kinds of forces or physics beyond what we already understand.
Claiming it feels different is not a convincing or logical argument, it is the equivalent of saying ice feels different than water, thus ice must be a different substance than water. Wrong. The solidity of ice (or the liquidity of water, depending on how you want to look at it) are emergent properties of H2O that depend on the temperature and density of H2O molecules.
Qualia and consciousness are explainable emergent phenomena of a neural system complex enough to fall into self-triggering loops. Disrupt the signaling and disrupt consciousness (but not necessarily the subconscious activity). Thoughts are broadcast signals that prime neural modules and thus trigger other parts of the brain; thus more thoughts, more qualia, more feelings, in a feedback loop until something external takes priority, or we run low on energy and have to sleep (to clean the brain of waste products it produces in the operation of neurons).
These conclusions are not hard to reach. I can't say what Dr. Smolin is having trouble explaining, I will wait and see if what he publishes is consistent with what we already know of brain operation.
I personally don't regard the broad outlines of brain operation as particularly difficult to comprehend, or glossing over any critical observed phenomena. It really is that simple.
4. Dr Castaldo
You say:
Okay, so please do explain. Just how does consciousness emerge? Just how complex? Are there equations? What is the ACTUAL physical correlate of your qualia of red? As opposed to green? As opposed to a color-blind person? As opposed to your qualia of a salty taste? As opposed to not noticing a color at all?
5. Mike: Sure, let me give you a few semester courses in experimental psychology, neurobiology and artificial intelligence in comment box.
Read my comments related to neurons and consciousness and pattern processing in Dr. Hossenfelder's other thread;
Here's the "Father Guido Sarducci's 5 Minute University" version.
Neurons, individually, are pattern matchers; they respond when certain patterns of input occur within a window of time determined by electrochemical decay of signals. They are teachable by exposure to patterns.
Neurons compose into networks to model patterns and signal recognition to other neurons. This is efficient because we tend toward decomposable and hierarchical patterns. My brain contains a model of a "wheel" that can itself be a component of many larger models: Carts, cars, baby carriages, trains, pulleys, lawnmowers, unicycles, bicycles, tricycles, ad infinitum. What do they have in common? Not much universally, a circle that turns and somehow bears a weight. Wheels are composed of may-be-present sub-models: spokes, tires and rims, hubs, brakes.
If you want another example, consider the myriad manifestations of "door". You can even have a door IN a door; e.g. to look out of, or to let the dog in and out.
This approach is efficient because the combinatorial explosion of features for a "door" reduces the info load to be carried from every door you have ever experienced to a generalized model that can be 'decorated' with specific features to represent a specific door; perhaps with a specific "handle", and "lock", and means of moving out of the way (up on rails like a garage door, sliding sideways on wheels like a hidden door, opening from an edge on hinges, dilating parts like on a space station, spinning on a central axle, etc).
This also has the beneficial effect of letting us recognize as doors combinations we have never seen; or even mechanisms we have never seen. In a fantasy show Joe uses a knocker mounted on a solid brick wall, someone yells "come in!" and we see the wall vanish, and reappear after Joe walks through the opening. That is still a 'door'.
Our brain is composed of millions of these interconnected models. They work backward and forward. Their likelihood of signaling is changed, temporarily, when some of their inputs match the pattern, but not all of them. When I say (to most Americans) “Red, White and”, their neural model of BLUE has been primed so strongly by these three words they will almost undoubtedly finish this sentence with the word “blue” (which is one input to their neural model of BLUE) as the most probable next word. It isn't the only possible word; I can add different priming and defeat that result: “There are three principle varieties of wine, Red, White, and” (and nobody says “blue”).
The physical correlates of the qualia “Red” are all the many thousands of neural models being primed (electrochemically readied for short-term firing) by the mention of the word Red. This will include visual models where one has seen red, audio models if one has heard “red”, but even in the blind and deaf it would include all mental models in which “red” is considered a component.
But not every model is equally primed; some models have nothing to do with red, and do not become more likely representations of reality if “red” is seen, heard or sensed. They may become less likely. The physical qualia of “red” is this collection of priming which has a physical electrochemical component within trillions of neurons, and often firing (due to previous priming with other senses/words), which will include emotional content (also a product of neurons). We feel something about “red”, technically unique to each of us, but by similar experiences in the world, we have much in common too.
6. Dr Castaldo
It seems you have never heard about the problem of zombies, have you?
And have you dared to think about that a bit longer, without your professional prejudices?
Best regards,
7. @Wojciech: I am aware of the non-problem of zombies, I do not regard it as a valid argument. If you had bothered to read what I wrote, here and on the "free will" thread; you would probably have gathered this.
The proposition is a being that acts like us, but lacks conscious experience or sentience, but behaves as if it has both. I reject that out of hand, it is as false a premise as proposing an ax that has no handle or blade, yet chops wood just like a real ax. It is magical thinking and as pointless an argument as debating how many angels can dance on the head of a pin.
It is not how the brain works and contrary to observations and facts.
Asserting I have "professional prejudices" because I disagree with you is actually the fallacy of arguing from authority; or that I am wrong "because you said so."
Instead of insulting my integrity with "prejudices" or implying I am uninformed or stupid, you can just say you disagree with me.
Or you might pretend you are a scientist and provide a logical refutation of my argument, or a logical challenge.
But I understand if you cannot; insults are the go-to strategy for people that have run out of rational argument, so their emotions step in to take up the battle.
I have presented plenty of material here for laymen that are actually worth challenging and arguing about, we don't have to debate the existence of fairies.
8. My head is with Costaldo, but my heart is elsewhere: I'd really like to be believe that consciousness is a special ontological category, but the scientific evidence is against it.
But if consciousness were special in some way, might that make it possible to accept a naive Copenhagen (or von Neumann) interpretation of QT?
9. Andrew: If consciousness is NOT special in any way, then an "observer" is just a component of the environment and it follows that wavefunction collapse is, at least in some circumstances, environmentally caused.
But every particle is in some environment, so environmentally induced collapse must involve some kind of threshold; and that becomes a research direction.
Perhaps there is a limit on how much mass can be in superposition, or entangled. Perhaps, as Dr. Hossenfelder has mentioned before, there is an issue with conflicting gravitational fields of a mass in super-position that is in more than one place at once.
Trying the two-slit experiment with larger and larger masses is one route to exploring such a limit.
To answer your broader assertion: When I claim consciousness is just the operation of the brain obeying the known laws of physics; I am not asserting that minds, emotions and pains (of people and animals) are dismissible; in fact I'd say in the end I prioritize those over just about anything else.
The reverence I have for life, love and happiness does not demand they have a supernatural or magical component beyond comprehension. In fact dismissing any magical component broadens our research horizons so we can find and address the physical issues standing in the way of consciousness and quality of life.
IMO, consciousness does not have to be magical to be viewed as a special condition deserving its own set of rules for how we behave toward things with it, and without it. I feel the same thing about pain and emotions in our fellow animals.
Consciousness is already special. For me that specialness does not have to be justified by any new physics, nor does it have to be an unfathomable mystery.
10. " issue with conflicting gravitational fields of a mass in super-position that is in more than one place at once."
Penrose's idea?
A nice sentiment, but aren't you trying to eat your cake and have it too? I don't think you can assert consciousness is reducible to chemistry and still maintain a reverence for it.
"Consciousness is already special. "
No, feedback mechanisms can be simulated by computers. Would a conscious computer be "special"? Maybe. But that would be a lesser distinction than humans crave.
A dark suspicion of mine is that the universe is an attempt to simulate things that are logically impossible: consciousness and identity. We might reverence those illusions because we mourn their impossibility.
More concretely, how about my question, "But if consciousness were special in some way, might that make it possible to accept a naive Copenhagen (or von Neumann) interpretation of QT?"
If we postulate that consciousness is a new order of reality, could that allow a less tortured interpretation of QT?
11. Andrew Dobrowski: Consciousness is already special in the normal sense of special, I do not mean that in any supernatural sense at all.
Can a computer be conscious? I see no reason why not; I am a computer scientist and mathematician, I've worked and developed new algorithms in statistics and artificial intelligence. I think electronics could accomplish the electro-chemical signaling of neurons, so in principle I'd say yes, computers can be conscious. But I already think mice, rats, dogs, pigs, cattle, dolphins, elephants, and many other animals are conscious beings, so I already think consciousness is a "lesser distinction than humans crave". I don't really see how "what humans crave" enters into the scientific assessment of what counts as consciousness.
If consciousness were special in some supernatural way, perhaps, but I don't think it is. It is special in the sense it is rare, and an emergent property of a complex brain, but it has no fields or chemistry exclusive to it.
Sure! But there is no justification for that postulate, I can postulate that God exists, or incomprehensible magic exists, or "life force" exists, and invent rules for any of these fantasies that explains everything! That doesn't make the fantasies any more plausible.
There is no evidence consciousness is a new order of reality, other than the ego of wanting to believe we humans are unique in the universe. But of course there is also no evidence we humans are not unique in the universe! As far as we can discern, there was about a billion years of life on this planet that resulted in modern human level intelligence perhaps 100,000 years ago. We have evolved a mental capacity for generalization, abstraction, and long term planning that seemingly no other intelligent animals have evolved in the past few hundred million years of reasonably intelligent animals. We have seen no evidence of this happening with any other species, past or present, here or elsewhere.
Although we have barely begun to look elsewhere, it may well be that life evolves readily in the universe, but always stalls at the level of intelligence of dinosaurs, chimps, dolphins and elephants, all of which clearly have emotions much like humans, all of which make and use tools and can be inventive. But they don't have the special mental capacity of humans.
So, as I said, we are already special, for all we know we are the only technology-species in existence, and even if we are not, we are unique amongst our fellow life forms.
But our specialness and uniqueness (thus far) does not have to be explained by any new physics; it is a product of complexity and precision in a brain made of the same stuff and working by the same principles we see in other animal brains.
12. I think we largely agree, with a difference in attitude: I don't believe in the soul but I'd really like to, while you seem quite happy without it. I fear that without the soul, everything we value about life is illusory. Self-learning feedback mechanisms are remarkable, but they are not precious.
"There is no evidence consciousness is a new order of reality"
That's too categorical: there is evidence, it's just not very persuasive. For starters, anything that might make QT easier to deal with earns some ontological credits. That even John von Neumann chose to formulate QT in terms of consciousness is a significant data point.
I know this forum is down on beauty, but if a theory _does_ work, and is more elegant than a competitor, shouldn't that count as at least weak evidence in its favor?
11. In reading this I am a bit relieved to see that Smolin is not trying to appeal to the pilot wave Bohm QM interpretation. I usually think that when people try to devise a classical underpinning to QM that they have fallen off the horse. Some luminaries, 't Hooft for example, have gone this route. Smolin did not seem to make that sort of appeal.
When it come to the nature of time there have been those who argue time is not real, such as Barbour and those who do say it is real, Carroll and Smolin thinks time is real, but irreversible. General relativity is a bit “schizoid” on this. On the one hand the differential equations of general relativity are second order, which is a clear signature of a theory invariant under a Z_2 time direction change. On the other hand we have black holes with the area theorem that suggests an irreversible nature to time. Of course a black hole with the delay or tortoise time t' = t - 2m ln|1 - 2m/r|, r = ct, suggests that all the information that enters a black hole is still there. The only thing we have available to us are the most UV of quantum spectra, which are Planck or string scale quantum gravity modes. These in our more ordinary world are not accessible to us. A duality between UV and IR physics would then be
(q-gravity UV physics) = (IR physics of gauge fields and fermionic matter)
This is a way of writing Einstein's field equation!
If we are to appeal to philosophy and philosophers I would tend to invoke Immanuel Kant. He told us that what we observe is a domain of phenomena, but there is an underlying noumena that is fundamental. In a funny way the breaking of symmetry in the standard model is of this nature, for the false vacuum of symmetry and the symmetry of the Lagrangian are not directly accessible to us, but rather the physical vacuum physics restricted away from the symmetry of the Lagrangian. With quantum gravitation something similar is maybe afoot. The global theory of a grand symmetry may simply be unstable, and what we can only directly observe are local symmetries of a more restricted nature. In this sense then time is a local process. For instance, in a multiverse setting with an inflationary manifold it may not make sense that time here can to extended by parallel transport everywhere. So while in Kant's sense there may be a global meaning to time that is reversible, in a local setting what we measure as time is not reversible --- at least FAPP.
And what horse would that be, mathematicism? Seriously, the primary objection to the Bohmian model seems to come from theorists who object to the fact that it retains the the classical concepts of waves and particles, making the quantum realm fully contiguous with the rest of physical reality.
The problem with that apparently, is that it puts a crimp in the the cottage industry of scientifically baseless metaphysical speculations such as MWI. Realism is such a drag on the theoretical imagination.
2. Actually in his book Smolin spends a lot of time presenting pilot wave theory as a credible alternative.
3. Bohm wrote a paper in 1952 on how a form of quantum mechanics might be formulated as purely local. He wrote the wave function as ψ = Re^{-iS/ħ}, which is called a polar form because it is equivalent to how one represents a vector in the complex plane. This wave acted on by the Schrödinger equation gives a real valued differential equation that is a modified Hamilton-Jacobi equation
∂S/∂t = H - (ħ^2/2m)∇^2R/R.
The last term is called the quantum potential, which is thought of as the quantum modification of this equation of classical mechanics. This is the dynamics for a classical-like particle that Bohm designated the beable, for “Be Able.” The imaginary part is a continuity equation for hydrodynamic flow that from memory is something like
∂R/∂t - (1/2m)[R∇^2S + 2∇R·∇S ] = 0.
This is a conservation of the pilot wave, and one can derive so called guidance equation for the pilot wave from quantum currents undergraduate students compute in elementary quantum courses. There is nothing wrong with this. It is I think better to consider the unitary operators in polar form with the state being static, say taking a Heisenberg picture of this. Still there is nothing completely wrong with this, but it has “issues.”
What Bohm tried to do is to show that this means QM is purely local. John Bell was in fact highly motivated by this and tried to prove that this is the case. He failed, and I give a brief description of Bell's theorem at the end here. This effectively torpedoed the idea that Bohm's representation of QM demonstrates a more fundamental locality. It has other problems, where if you try to do the Klein-Gordon equation this way you find the classical-like beable actually moves faster than light. Yet we can look at matters of quantum chaos, and I think Bohm's QM has great utility there, and I think aspects of quantum chaos lie with quantum gravitation. However, it does not really appear to tell us in any way that quantum mechanics is “wired up” on some substratum by classical principles.
I had read that Smolin appealed to Bohmian QM to try to satisfy Einstein's objections. This is is a problem I can see with his program and why if I read his book it will be borrowed. I tend to think that people who appeal to Bohmian QM as a way to restore a classical objectivity to physics have in some ways lost their way.
Since 1809 we've know from experiment that Malus's law always works, that is to say the amount of light polarized at 0 degrees that will make it through a polarizing filter set at θ degrees is cos^2θ. For example if θ = 30 degrees then the value is .75; if light is made of photons that translates to the probability any individual photon will make it through the filter is 75%. The Bell inequality with polarizers is if one polarizer is set 30 degrees relative to the other, then think of the photons as polarized in the way a nail has a direction. 30 degrees is a third of a right angle, and so if we think of the photons as being like nails aligned in a certain direction, then at least 1/3rd of these nails would be deflected away. This is why an upper bound of 2/3rds of the photons in a classical setting will make it through, or less will by attenuating effects etc. But the quantum result gives 3/4. This is a violation of the Bell inequality, and with polarizers it is found in a "quantization on the large." Of course sensitive experiments work with one photon at a time, but the same result happens. This is done to insure there are not some other statistical effect at work between photons.
4. My reading of Smolin's book does not jibe with "I had read that Smolin appealed to Bohmian QM to try to satisfy Einstein's objections." I did not see an appeal in relation to Bohm. Following is a brief excerpt from The Singular Universe where Bohm's work is referred to in a limited way on page 488. Perhaps this will encourage you to reconsider looking beyond what you read.
"[t]he goal must be to discover the cosmological theory that quantum mechanics approximates, but it channels that search in a certain direction. It is notable that the hypothesis of a preferred global time coming from our program is consistent with the need for a global time to express a non-local hidden-variables theory. This has been seen explicitly in relativistic versions of Bohmian mechanics [88] and spontaneous collapse models [90] which reproduce the Poincare invariance of the predictions of quantum field theory while breaking that invariance for predictions that diverge from those of standard quantum theory."
I don't know where you got this idea but you are seriously mistaken:
...all such "hidden-variable" completions of quantum mechanics must either be nonlocal (as the Bohm interpretation is)... -,_Einstein%E2%80%93Podolsky%E2%80%93Rosen_paradox,_Bell's_theorem,_and_nonlocality
Bohmian mechanics is manifestly nonlocal....Thus Bohmian mechanics makes explicit the most dramatic feature of quantum theory: quantum nonlocality -
With regard to Bell's inequality, it showed that local hidden variables would predict results differing from those of QM. Subsequent experiments bore out the QM predictions, meaning that local hidden variable theories were ruled out. Bohmian mechanics has non-local hidden variables and violates Bell's inequality in exactly the same way that standard QM does.
Other than expressing aesthetic dissatisfaction with pilot-wave theory, while exhibiting a basic misunderstanding of it, you don't seem to have any substantive objection. This suggests that you perhaps simply prefer metaphysical interpretations of QM, like MWI. You seem to dislike Bohmian mechanics because it is realistic.
6. John Bell derived his inequalities in 1964 in a paper titled On the Einstein Rosen Podolsky Paradox, where he formulated a hidden variable in a way that satisfied realism and locality. His original intention was to show that QM satisfied these two conditions. However, it was not that difficult to show QM failed to do so. Bell was a panegyric for Bohm's QM and wanted to to show there was locality in QM as was thought to be the case for a beable in the pilot wave.
I do not "hate" Bohm's QM so much as I see it as a quantum interpretation with limited scope. It is considered nonlocal because to model measurements the quantum potential is modified "by hand" in a way that is a sort of collapse.
I can't comment too much more on Smolin's book outside of reviews I have read. It does appear Smolin is trying to answer Einstein's objections to QM in a way that restores classical objectivity.
7. There is nothing in the paper you cite, On the Einstein Rosen Podolsky Paradox, that supports your claim that Bell's "...original intention was to show that QM satisfied these two conditions" (realism and locality). Here are the opening sentences of that paper:
This begs the question, compared to what do you consider it limited in scope, a metaphysical interpretation like many-worlds?
12. I think there still is the great danger of beauty leading physics astray, once again. Philosphy may be an inspiration for a new model, maybe it can also be a guide to develop it further, but it cannot fully replace math and observation.
And philosophy can also be worn as a mask by intuition to sneak into science and poison it from within, because scientists can be "all too human".
So when there are different competing models based on the same original mathematical physics, and one of them gets the support from philosophy, it gets easy to reject competing models with sloppy reasoning. That can then become self-reinforcing: if one disallows defending a model that lost in the area of philosophy, then it's easy to "prove" that the competitors of the "winner" are mathematically wrong as well, so it is deemed to be the mathematical winner without having actually proven it. Which can become fatal if the winning model causes internal contradictions later on, when everyone has forgotten that the other models were cheated out of having their fair chance - and so they get cheated once again now that they are needed, since everyone believes they were disproven.
13. I agree that time is fundamental. In fact, I think it is THE fundamental. Considering Special Relativity, we see that approaching light speed, time draws to a halt and dimensions (ahead) tend to zero. My contention is that motion cannot directly affect dimensions (or space), but it apparently does directly affect the rate of passage of time (and in a circular way - Lorentz).The only way this can happen is for the passage of time to be somehow wavelike ( faster then slower then faster then,....etc). This would also explain the emission of photons in waves of varying numbers (light intensity). The wave nature of light would be a demonstration of the wave nature of time. I could go on but I would have to rewrite my book "The Binary Universe".
1. I suspect conciousness is the fundamental. All systems of physical relationships that allow our type of consciousness are realized "somewhere" in a highly multidimensional mathspace.
The "arrow of time" may just be one of the restrictions necessary for our type of consciousness ... "memory" and "anticipation" interacting in an elusive "now."
2. Time is the geometry of consciousness. Particles are how we model events at the surface of intersection of consciousness and... some sort of possibility space... but today's physics wants to label these ideas "the measurement problem" and sweep them under a small rug.
14. During the first decades of the 20th century, some lively philosophical debates appeared about the concept of time (far from the scientific views of Einstein). To a certain extent, Henri Bergson and William James developed similar ideas independently. And Whitehead has been largely inspired by James’s ideas, particularly with what James called the ‘epochal theory of time’, which is based on Zeno's paradox, which traces back to the 5th century BC.
It is related to the question of (infinite) divisibility of space and time. Curiously, while William James did not know nothing about QM, he ended up with the notion that becoming - or the flow of experience (not time per se) - must proceed with discrete units, ‘drops of experience’, or ‘discrete pulses of perception’. On a very different ground, It is difficult to avoid thinking about Max Planck…
But this is only one aspect of a more extended thought, where several important concepts are tied together: time, process, becoming, continuity versus what they called then ‘atomicity’…
Some meaningful consequences of this debate can be found in Whitehead’s more abstract ideas, for example through the distinction between genetic and coordinate analysis in 'Process and Reality'.
A good article is available online about James’s ideas: ‘William James and the Epochal Theory of Time’, by Richard W. Field, 1983.
1. Great info, thanks. I've long been inspired by Whitehead in my work.
15. If a professor of English literature were to announce that for his discipline to progress a new language must be created with a new grammar and a new vocabulary and, furthermore, that new literary forms akin to our current epics, sonnets, novels, etc., must be created if there is to be further progress then we would likely view the study of English literature as having brought us to a dead end. Likewise, Dr. Smolin's desire for new "first principles" reminds one of Woody Allen's statement, "The only thing I regret in life is that I was not born somebody else." All of this, together, is a tacit admission of defeat in the application of physics to understanding the universe.
One can understand the need to tinker here and there with any discipline: to amend, adjust, revise and revivify hypothesis in the light of new information. But, such remodeling, as occurs in all the sciences all the time, is different from scrapping everything in order to start over at Ground Zero. It is, furthermore, a ludicrous consul of perfection to suggest that we need can blithely summon a new physics from the "vasty deep" when what we are stuck with right now took better than three centuries to evolve.
If three centuries of rigid materialism (or, if one prefers, "Naturalism") pushes us head-first into a brick wall with no further way in sight then perhaps it was the wrong path to take from the start. Thus, I am not surprised that Dr. Smolin sees possible value in a discipline long scorned by many physicists -- philosophy.
I am advanced in years and will not live to learn whether Dr. Smolin's gets his new "first principles." But,physics seems at this moment to he like the ending of "Waiting For Godot" -- "We can't go on. We'll go on."
1. Well said A. Andros! Initially I didn’t interpret Smolin quite that literally — as in physics now needs a full reboot. I figured that his call for new first principles was just a bit of rhetoric. But given his advocacy of traditional philosophy I’m starting to think that he might have been speaking literally here as well.
As I see it the problem with traditional philosophy is that it has not developed a respected community of professionals with their own generally accepted understandings to provide. Physics is quite the opposite. But how could philosophy help physics today when dedicated philosophers provide no agreed upon answers regarding the questions that they explore? Is he advocating some kind of Zen thing, or “journey” rather than “destination”?
Actually I do consider the subject matter of philosophy important however. I believe that science suffers tremendously today without generally accepted principles of metaphysics, epistemology, and value from which to work (though our mental and behavioral sciences should suffer the most).
My single principle of metaphysics states that to the extent that causality fails, nothing exists to discover. This would effectively put physicists who endorse Heisenberg’s uncertainty principle as and ontological rather than epistemic void in causality, into a supernaturalist camp.
Also pertaining to physics is my second principle of epistemology. It proposes one exclusive process by which anything conscious, consciously figures anything out: It takes what it thinks it knows (or evidence), and uses this to assess what it’s not so sure about (or a model). If generally accepted then all sorts of suspect practices should be penalized, such as eschewing empirical evidence for mathematical beauty.
2. "Waiting For Godot" -- "We can't go on. We'll go on." it's apply as well for theorical physics as for the ecologicals problems !
16. I basically agree with the L. Smollin's views, but I would like to add some further input:
- Having paradoxes and "brick walls" in the current theories of physics is unavoidable and good, since they indicate what is the domain of aplicability of those theories. For example, a 19-th century physcist would be puzzled with the Rudeford atom model, since according to the classical EM theory the atom would be unstable. So even without any experiments he would conclude that at atomic distances a different theory is needed. Similarly, if we assume that the standard QM applies to the whole universe, there is a problem of the observers and the measurement of the wavefunction of the universe, as well as the problem of how to define probabilities for a universe. The current QG theories do not address this problem, and they are essentially concerned with a problem of how to incorporate GR into a QFT framework.
- As far as the role of philosophy in formulating new theories of physics, so far, it was not essential, but as we expand the dominion of aplicability, it will be helpful to understand the concepts of metaphysics, as well as the basics of formal logic.
17. “Reality is nothing but a mathematical structure, literally”.
That is your first principle.
From the simplest structure that relates set of numbers to others the whole of the physics of reality falls out,QM,QFT,GR, space, time ,energy all one one coherent system.
1. Your first principle is nothing but the ubiquitous mathematicism ( that is the primary cause of the dead-end state of modern theoretical physics. Mathematicism is an ill-considered philosophical belief that has no basis in science, mathematics, or logic.
Physics is the study of physical things and events; it is not (or not supposed to be) the study of the abstract mathematical musings of reality challenged individuals who seem incapable of distinguishing between their mathematical fantasies and empirical reality.
But for the sake of argument let's grant you your first principle. And on the basis of that first principle, I request that you provide, for each of matter, energy, space and time, a definition that is coherent, concise, and consistent empirical observations.
2. This comment has been removed by the author.
3. Thanks Bud for your reply. As you know giving references on Bee's blog is like shooting oneself in the head. However, if you google the main sentence with FQXI plenty of information will come up (or simple click you know where:)) on the subject that has been thoroughly discussed in FQXI contest with two camps.
I am not the first to come up with the idea (the idea is been around in some form since Plato), but I have developed it independently with substantial "proof". Thanks again.
18. Time, Time is on my side.....yes it is...
19. I wished Smolin had mentioned some of the other benefits of philosophy to scientists: A greater willingness to question one's deeply held assumptions, a greater drive to examine (non-mathematical) arguments for fallacious reasoning, and most importantly, a greater awareness of one's philosophical biases before they turn into cognitive biases.
Everyone has philosophical biases, but not everyone is aware of what they are, and this lack of awareness can facilitate the descent into cognitive bias.
This all probably sounds nice in the abstract, but I don't think people know what do with this until they see a concrete example. An example I consider highly salient is as follows:
I consider the belief that there has to be a deeper theory unifying GR with the SM a belief which reflects a particular philosophical bias, one historically well-grounded in past successful unifications and supported by reasonable theoretical arguments, but still reflecting a philosophical bias.
What happens when one does not recognize it for what it is, i.e. a belief reflecting a particular philosophical bias?
One is liable to mistake it for a "fact" about the world, in the sense that, say, gravity itself is a fact about the world.
What it would take to consider quantum gravity a "fact" about the world is an observation or experiment in which gravity is observed to be quantized. Absent that, an experiment or observation in which a system which obeys quantum laws is shown to produce a gravitational field would also do.
At this time, we have neither.
(Continued in next post)
20. (Continued from previous post)
Experiments or observations in which quantum systems are affected by gravity (e.g. gravitational bending of light) will not do, because they can be explained by a rival hypothesis: QFT in curved spacetime. So long as observations which suggest there must be a theory of quantum gravity can be also explained by a rival hypothesis, we cannot be conclusively certain that quantum gravity is in fact the correct explanation and therefore we cannot use them to promote it from a belief to a "fact" about the world.
(It might not be clear how QFT in curved spacetime works in strong gravitational regimes, but since we have no relevant observations in such regimes, this can only be considered at best as an additional theoretical argument in support of the belief.)
Yet, as best as I can tell, lack of awareness of this philosophical bias has caused almost an entire community of physicists for multiple generations now to consider the existence of an as yet undiscovered theory of quantum gravity a fact.
Some even mistake the gravitational bending of light as a conclusive argument for quantum gravity, a mistake I attribute to cognitive bias caused by lack of awareness of one's philosophical bias.
This conflation of belief with fact has real consequences because it closes off various other possibilities for advancing our fundamental understanding of nature. If the physics community were more skeptical of quantum gravity, then that would likely spur a stronger drive to come up with ideas to experimentally test it independent of the particular model or experimental approach.
For example, I think it would be very challenging and expensive, but doable, to measure the gravitational field of an ultra high energy beam or pulses of light.
The beauty of such an experiment would be that even if the result comes out exactly as expected, it would still be a resounding success, something that cannot be said of a particle accelerator slightly larger than the LHC.
Just like the discovery of the Higgs boson, which added to our knowledge even though it was completely expected, the detection of a gravitational field produced by light exactly as expected would add to our knowledge by promoting a fundamental belief about the world into a fact, namely quantum gravity.
But if one already considers quantum gravity a fact about nature, then such experiments would seem nearly pointless or at least not worth the effort. Consequently, there is, as far as I can tell, no strong push for this kind of experiment.
Getting back to my main point, philosophy can help us not just by the ways Smolin described but also by helping us question our most deeply held and therefore most "obvious" assumptions, by helping us think clearly and especially be helping us recognize how our philosophical biases frame our understanding of the world.
1. Sabine has posted on detecting quantum superpositions of spacetime variables This is focused on the Aspelmeyer group experiments. I read a paper a couple of months ago which proposed a way of looking at superpositions of metric configurations. I would have to spend some time looking that up again. Roger Penrose in his Road to Reality proposes an experiment that connects wave function collapse with quantum gravitation. I am less certain about that, for his so called R-process I doubt is fundamental and more a phenomenology, but he may be right at the phenom-level.
In a nutshell these ideas are all forms of the Cavendish experiment. This is the torsional bar with two masses suspected between two other masses so a light reflected off a mirror on the bar is measured to find the strength of the gravity force. This was used to find Newton's G = 6.67×10^{-11}Nm^2/kg^2. The Aspelmeyer group uses a mass on a spring, but the idea is essentially the same. We might think of the Cavendish experiment where there is some quantum superposition of the metric configuration with these masses. A laser beam reflecting off this mirror is then in a superposition of two different, or should I say very slightly different, directions. This is then tacitly a sort of beam splitter.
In an exercise I did many years ago I looked at a three body problem with one large mass, a medium sized mass and a very small test mass. The two larger masses have Newtonian dynamics and gravity, with the test mass is close to the largest mass and exhibits general relativistic effects. Think of the sun, Jupiter and Mercury, and ignoring the other planets. The upshot, which seemed to be working out, is that GR breaking of the invariance of the Lense vector or the periapsis shift of the orbit in the tiny mass amplified the Lyapunov exponent in the chaotic dynamics one would get with just Newtonian mechanics. I never did anything with this. However, for the purposes of detecting quantum gravitation we might consider the double pendulum. This has for swing angles sufficiently large chaotic dynamics. So for this in a Cavendish-like experiment might then illustrate some amplification of chaos that bears signatures of quantum gravity.
These sorts of experiments are important for motivating quantum gravity physics. They are just the start. These experiments are tough to accomplish. As I think spacetime is a sort of coherent state configuration of entangled states, these are then a classical-like subspace of the Hilbert space, which may be one reason spacetime is so obstinate in revealing quantum properties. However, remember Einstein's coefficients, where the emission of any set of quantum states will have both A and B coefficients. This means that even if there are primarily coherent states there will still be some mixed states as well. We will then be trying to measure these as small fluctuations.
21. My thanks to all three of you for creating and posting this interview, which was one of the most intriguing ones I've read in a long time. Lee Smolin, please keep at it. While I am just a poor bewildered information specialist, my understanding of search spaces tells me you are on the trail of something quite important.
22. Also, I'll add one remark about the concept of a "unique" foliation: From an information perspective, I have been persuaded over the past couple of decades that our universe is both hugely more (a) efficient and (b) devious than the models we use to describe it.
What I am suggesting is that many of the seemingly "universal" features of physics are what computer science would call virtual. That is, while they do adhere to a very strict set of internal, mutual, and temporal self-consistency rules, they do not actually exist until "funded" by the insertion of energy. Planck foam would be a good example of an, um, seriously underfunded virtual physics concept.
Applied consistently, virtualization allows finite resources (finite mass-energy) to pose convincingly as all sorts of seemingly universal effects, when in fact those effects are nothing more than highly localized virtual charades that exist only because enough energy came together for a while to instantiate them... and even then, only to the level of detail possible at those energies.
Virtualization has significant implications for issues such as vacuum density, the block universe, and the detailed nature of foliations.
1. I like this. I've always wondered how the universe makes instantly all those quantum calculations that take weeks running on a supercomputer.
23. Moved this comment. I improperly nested it as a reply to a comment.
This is at best a dodgy claim. It may be valid mathematically while being invalid in the context of physical reality. If the cosmos is not, in fact, a unitary entity with a preferred frame, the ability to fashion such a mathematical model means that the so-derived model will not accurately describe the underlying physical reality.
Given the reality-challenged nature of the LCDM description of the cosmos, with its stable of undetectable entities and events, that indeed appears to be the case. Smolin wants new physics from the same old failed methodology of prioritizing mathematical abstractions over empirical reality.
And so Smolin reifies the relational concept of time as a solution to the correctly perceived crisis in physics? That's like trying to cure a headache by banging your head on the wall.
1. Laser coherent states are a subspace in Hilbert space with a symplectic and classical-like structure. Maybe spacetime is from a quantum mechanical perspective built up from coherent states, or a large N entanglement of states. It might then be the preference for background dependency stems from this.
2. Physical reality does not stem from mathematical formalisms. Mathematical formalisms are supposed to provide reasonably accurate, qualitative descriptions of physical reality. The current standard models do not provide reasonably accurate descriptions of physical reality.
3. The standard model does pretty well. The theory with color and flavor changing interactions, known as weak and strong nuclear, the intertwining of the weak with electromagnetism in hypercharge and the weak angle, the Higgs particle and so forth are pretty much on the money.
The entry here on the muon g - 2 is one place where there may be deviations from the standard model. Of course Bee is right to point out that nature is not that obligated to follow our sense of aesthetics, but the standard model has a very kludge aspect to it. It is not entirely unreasonable to think there is something beyond the standard model, certainly as one reaches quantum gravitation energy.
4. When I say the standard models do not provide reasonably accurate descriptions of physical reality I mean, very specifically, that both models have a significant number of entities and events that are not part of empirical reality. A partial list would include quarks, gluons, the w, Z and Higgs bosons, dark matter, dark energy, substantival space, time and/or spacetime, the inflaton field, and the big bang event itself.
None of these things are observables; they are not part of empirical reality. They are all either axiomatic or model dependent inferences. The models simply do not resemble the physical reality we observe (detect).
If you are going to claim that I don't understand the way modern science works, my response is that I understand how it works quite well. And it is specifically the way it works that I am criticizing.
Even more specifically, I would say that the general methodological approach of modern theoretical science which prioritizes mathematical models over empirical reality is deeply flawed, fundamentally unscientific, and the root cause of the "crisis in physics".
5. I guess I do not understand entirely what you mean. Quarks, gluons, W and Z bosons etc are observable. At least the predicted results of their existence are detected. The presence of dark matter is observed by its gravitational effects. Dark energy is inferred from the accelerated expansion of the universe, and the big bang is inferred from the CMB.
6. @Lawrence Crowell; bud rap: Even gravity is only observed by its effects, we have not found any particles for it. It is a mathematical model.
Does this make our theories of gravity (all mathematical models) fundamentally unscientific?
I don't think so.
No to the first sentence, Yes to the second. Those sentences are mutually exclusive. Observation and inference are not the same thing and they do not carry the same scientific weight. Quarks, gluons, the W, Z and Higgs bosons, are inferred, not observed entities.
Dark matter is a failed hypothesis that was put forth to compensate for the inability of theorists to derive formalisms appropriate to galactic and cosmological scale systems. Keplerian and Newtonian formalisms that were derived in the context of the solar system do not, self-evidently, work.
It is hard to understand why anyone would think that they should work given the completely dissimilar physical structures of the larger-scale systems. Another gift of mathematicism, I suppose. Dark matter is a failed hypothesis because there is no empirical evidence for its existence. It doesn't matter that it makes an inappropriate model(s) work.
Dark energy is inferred from the assumption that a discrepancy between SnIa supernovae luminosity and redshift distances is caused by the acceleration of the "universal" expansion that is, itself, inferred from the 90 year old assumption that the redshift-distance relationship observed by Hubble is caused by some form of recessional velocity.
None of those assumptions and inferences rest on, or are supported by, any empirical evidence. In fact, assumptions and inferences would not be necessary, if there were empirical evidence for the claims being made. As an inferential chain grows longer without empirical evidence it also grows scientifically weaker.
The CMB was a prediction of the big bang model, although various predictions by well-known cosmologists, ranged over an order of magnitude right up until its 1965 discovery. See:,_discovery_and_interpretation
Note that predictions based only on thermodynamic considerations were generally more accurate. "Cosmic Microwave Radiation" is the more accurate empirical description - the "background" designation is just another model-dependent inference.
8. Dr Castaldo,
You are correct, our mathematical models of gravity tell us nothing about the mechanism that produces the gravitational effect. The models' scientific value is as as calculational tools; they have no explanatory value. If modern theoretical research were serious about the "fundamentals" this would be a prime topic of research rather wasting time and money on scientifically inert string theory.
By comparison, I can't think of any useful calculations provided by LCDM, and while QM provides such tools it offers no coherent physical explanation for the outcomes it calculates. It is more unsatisfactory than the gravitational models because it invites endless metaphysical speculations that have no scientific value whatsoever.
24. Whenever I read "Einstein" in a book title, I reach for my water pistol...
Seriously, I do not quite know what to make of Smoling's musings on the foundation of physics. It's all a fair of "perhaps this and perhaps that, and what if this and what if that". Take this statement, for instance (it is by TH, but summarizes well Smolin's approach): "the next step you take after stating your first principles, is an acknowledgment that time is fundamental, real and irreversible".
Now Sabine, be honest. If someone had posted a comment like this on your blog, he/she would have got one of your snappy responses: "well then, go and build a mathematically consistent theory of time, publish it, and win a Nobel prize". Smolin is just more articulate and experienced and he can drag on for a whole book about it (I've read Time reborn), but in the end he is no better than your average layperson comment on this blog.
I had appreciated "The trouble with physics", which was right in spurring physicists not to ignore the the current problems of foundational theories. But since, I have the impression Smolin is just trying to compensate for his lack of concrete ideas with some woolly "philosophical" thinking.
1. You need to read Smolin's peer-reviewed publications. His real ensemble theory does *attempt* to build a mathematically consistent theory of QM within a non-reversible model of time. I applaud that effort, and his explicit commitments to "naive realism" and empirical test. It's an effort that is in some ways heroic, or perhaps quixotic, inasmuch as physicists continue to get funding to work the same failed paradigms. As long as they do so, there will be no kuhnian crisis to remedy.
Since cold war institutionalization and bureaucratization of funding controlled by those who benefit from these same funding mechanisms, it is much more difficult to tip academic communities into recognition of crisis. Smolin and a few others recognize the crisis and make halting attempts to remedy it. The new book's epilog is a poignant recitation of regret that he did not do more.
25. It is anything but trivial to regard space and time as physical "objects". Space and time are primarily "order patterns of the mind". In order to "create" physics from these mind-patterns, a phenomenological examination and explanation are absolutely necessary. To really start over here, no one is able to do that using existing formalisms primarily based on mathematics (needs).
An example of a really different approach: The “secret” of very weak gravitation in relation to the electrical interaction and strong interaction is based on the false assumption that there is generally a mass decoupled space existing. If one considers inherent space that elementary particles and macroscopic bodies contain, by their object expansions and by their radius of interactions, then it becomes clear that the "missing" energy is (in) the space itself. To create space(-energy) space-coupled mass(-energy) must be transformed in such. One consequence, inflation is ruled out. Another consequence of the observed “real object physics” is, the “massless concept” of the SM is ruled out.
26. I like Lee Smolin's natural philosopher approach, we really should bring physical intuition and philosophy back to physics. Symmetry tricks work well up to a point,but they abstract out the fundamental concepts that underlie the symmetry. I know this sounds like a rant against Copenhagenism, but Lee is correct that our historical path through physics has been most successful when physical intuition has been our guide. The worst thing that can happen to physics (and I think it already has) is that we discover a beautiful mathematical formula from abstract concepts that has limited predictive power (because we don't understand it's fundamental concepts). Perturbative analysis coupled with renormalization are the enemies of progress in understanding nature.
27. @Bio_interloper
As to "work (on) the same failed paradigms": those people like Smolin who, for almost a century now, have been trying unsuccessfully to build some consistent relist model of QM are a clear example of "failed paradigm".
As to the first part of your comment, I agree, I checked the references, and indeed he has published on this. Whether it is interesting, I do not know yet.
I wonder how these considerations about irreversible time compare with work done by Prigogine a few decades ago. By the way, tPrigogine's work was completely ignored by other physicists and long forgotten now.
Comment moderation on this blog is turned on. |
7d48fff9a4005bde | Friday, June 28, 2019
Quantum Supremacy: What is it and what does it mean?
Wednesday, June 26, 2019
Win a free copy of "Lost in Maths" in French
My book “Lost in Math: How Beauty Leads Physics Astray” was recently translated to French. Today is your chance to win a free copy of the French translation! The first three people who submit a comment to this blogpost with a brief explanation of why they are interested in reading the book will be the lucky winners.
The only entry requirement is that you must be willing to send me a mailing address. Comments submitted by email or left on other platforms do not count because I cannot compare time-stamps.
Update: The books are gone.
Monday, June 24, 2019
30 years from now, what will a next larger particle collider have taught us?
The year is 2049. CERN’s mega-project, the Future Circular Collider (FCC), has been in operation for 6 years. The following is the transcript of an interview with CERN’s director, Johanna Michilini (JM), conducted by David Grump (DG).
DG: “Prof Michilini, you have guided CERN through the first years of the FCC. How has your experience been?”
JM: “It has been most exciting. Getting to know a new machine always takes time, but after the first two years we have had stable performance and collected data according to schedule. The experiments have since seen various upgrades, such as replacing the thin gap chambers and micromegas with quantum fiber arrays that have better counting rates and have also installed… Are you feeling okay?”
DG: “Sorry, I may have briefly fallen asleep. What did you find?”
JM: “We have measured the self-coupling of a particle called the Higgs-boson and it came out to be 1.2 plus minus 0.3 times the expected value which is the most amazing confirmation that the universe works as we thought in the 1960s and you better be in awe of our big brains.”
DG: “I am flat on the floor. One of the major motivations to invest into your institution was to learn how the universe was created. So what can you tell us about this today?”
JM: “The Higgs gives mass to all fundamental particles that have mass and so it plays a role in the process of creation of the universe.”
DG: “Yes, and how was the universe created?”
JM: “The Higgs is a tiny thing but it’s the greatest particle of all. We have built a big thing to study the tiny thing. We have checked that the tiny thing does what we thought it does and found that’s what it does. You always have to check things in science.”
JM: “You already said that.”
DG: “Well isn’t it correct that you wanted to learn how the universe was created?”
JM: “That may have been what we said, but what we actually meant is that we will learn something about how nuclear matter was created in the early universe. And the Higgs plays a role in that, so we have learned something about that.”
DG: “I see. Well, that is somewhat disappointing.”
JM: “If you need $20 billion, you sometimes forget to mention a few details.”
DG: “Happens to the best of us. All right, then. What else did you measure?”
JM: “Ooh, we measured many many things. For example we improved the precision by which we know how quarks and gluons are distributed inside protons.”
DG: “What can we do with that knowledge?”
JM: “We can use that knowledge to calculate more precisely what happens in particle colliders.”
DG: “Oh-kay. And what have you learned about dark matter?”
JM: “We have ruled out 22 of infinitely many hypothetical particles that could make up dark matter.”
DG: “And what’s with the remaining infinitely many hypothetical particles?”
JM: “We are currently working on plans for the next larger collider that would allow us to rule out some more of them because you just have to look, you know.”
DG: “Prof Michilini, we thank you for this conversation.”
Thursday, June 20, 2019
Away Note
I'll be in the Netherlands for a few days to attend a workshop on "Probabilities in Cosmology". Back next week. Wish you a good Summer Solstice!
Wednesday, June 19, 2019
LHC magnets. Image: CERN.
Tuesday, June 18, 2019
Imagine an unknown disease spreads, causing temporarily blindness. Most patients recover after a few weeks, but some never regain eyesight. Scientists rush to identify the cause. They guess the pathogen’s shape and, based on this, develop test-stripes and antigens. If one guess doesn’t work, they’ll move on to the next.
Doesn’t quite sound right? Of course it does not. Trying to identifying pathogens by guesswork is sheer insanity. The number of possible shapes is infinite. The guesses will almost certainly be wrong. No funding agency would pour money into this.
Except they do. Not for pathogen identification, but for dark matter searches.
In the past decades, the searches for the most popular dark matter particles have failed. Neither WIMPs nor axions have shown up in any detector, of which there have been dozens. Physicists have finally understood this is not a promising method. Unfortunately, they have not come up with anything better.
Instead, their strategy is now to fund any proposed experiment that could plausibly be said to maybe detect something that could potentially be a hypothetical dark matter particle. And since there are infinitely many such hypothetical particles, we are now well on the way to building infinitely many detectors. DNA, carbon nanotubes, diamonds, old rocks, atomic clocks, superfluid helium, qubits, Aharonov-Bohm, cold atom gases, you name it. Let us call it the equal opportunity approach to dark matter search.
As it should be, everyone benefits from the equal opportunity approach. Theorists invent new particles (papers will be written). Experimentalists use those invented particles as motivation to propose experiments (more papers will be written). With a little luck they get funding and do the experiment (even more papers). Eventually, experiments conclude they didn’t find anything (papers, papers, papers!).
In the end we will have a lot of papers and still won’t know what dark matter is. And this, we will be told, is how science is supposed to work.
Let me be clear that I am not strongly opposed to such medium scale experiments, because they typically cost “merely” a few million dollars. A few millions here and there don’t put overall progress at risk. Not like, say, building a next larger collider would.
So why not live and let live, you may say. Let these physicists have some fun with their invented particles and their experiments that don’t find them. What’s wrong with that?
What’s wrong with that (besides the fact that a million dollars is still a million dollars) is that it will almost certainly lead nowhere. I don’t want to wait another 40 years for physicists to realize that falsifiability alone is not sufficient to make a hypothesis promising.
My disease analogy, as any analogy, has its shortcomings of course. You cannot draw blood from a galaxy and put it under a microscope. But metaphorically speaking, that’s what physicists should do. We have patients out there: All those galaxies and clusters which are behaving in funny ways. Study those until you have good reason to think you know what’s the pathogen. Then, build your detector.
Not all types of dark matter particles do an equally good job to explain structure formation and the behavior of galaxies and all the other data we have. And particle dark matter is not the only explanation for the observations. Right now, the community makes no systematic effort to identify the best model to fit the existing data. And, needless to say, that data could be better, both in terms of sky coverage and resolution.
The equal opportunity approach relies on guessing a highly specific explanation and then setting out to test it. This way, null-results are a near certainty. A more promising method is to start with highly non-specific explanations and zero in on the details.
The failures of the past decades demonstrate that physicists must think more carefully before commissioning experiments to search for hypothetical particles. They still haven’t learned the lesson.
Sunday, June 16, 2019
Book review: “Einstein’s Unfinished Revolution” by Lee Smolin
Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum
By Lee Smolin
Penguin Press (April 9, 2019)
Popular science books cover a spectrum from exposition to speculation. Some writers, like Chad Orzel or Anil Ananthaswamy, stay safely on the side of established science. Others, like Philip Ball in his recent book, keep their opinions to the closing chapter. I would place Max Tegmark’s “Mathematical Universe” and Lee Smolin’s “Trouble With Physics” somewhere in the middle. Then, on the extreme end of speculation, we have authors like Roger Penrose and David Deutsch who use books to put forward ideas in the first place. “Einstein’s Unfinished Revolution” lies on the speculative end of this spectrum.
Lee is very upfront about the purpose of his writing. He is dissatisfied with the current formulation of quantum mechanics. It sacrifices realism, and he thinks this is too much to give up. In the past decades, he has therefore developed his own approach to quantum mechanics, the “ensemble interpretation”. His new book lays out how this ensemble interpretation works and what its benefits are.
Before getting to this, Lee introduces the features of quantum theories (superpositions, entanglement, uncertainty, measurement postulate, etc) and discusses the advantages and disadvantages of the major interpretations of quantum mechanics (Copenhagen, many worlds, pilot wave, collapse models). He deserves applause for also mentioning the Montevideo interpretation and superdeterminism, though clearly he doesn’t like either. I have found his evaluation of these approaches overall balanced and fair.
In the later chapters, Lee comes to his own ideas about quantum mechanics and how these tie together with his other work on quantum gravity. I have not been able to follow all his arguments here, especially not on the matter of non-locality.
Unfortunately, Lee doesn’t discuss his ensemble interpretation half as critically the other approaches. From reading his book you may get away with the impression he has solved all problems. Let me therefore briefly mention the most obvious shortcomings of his approach. (a) To quantify the similarity of two systems you need to define a resolution. (b) This will violate Lorentz-invariance which means it’s hard to make compatible with standard model physics. (c) You better not ask about virtual particles. (d) If a system gets its laws from precedents, where do the first laws come from? Lee tells me that these issues have been discussed in the papers he lists on his website.
As all of Lee’s previous books, this one is well-written and engaging, and if you liked Lee’s earlier books you will probably like this one too. The book has the occasional paragraph that I think will be over many reader’s head, but most of it should be understandable with little or no prior knowledge. I have found this book particularly valuable for spelling out the author’s philosophical stance. You may not agree with Lee, but at least you know where he is coming from.
This book is recommendable for anyone who is dissatisfied with the current formulation of quantum mechanics, or who wants to understand why others are dissatisfied with it. It also serves well as a quick introduction to current research in the foundations of quantum mechanics.
[Disclaimer: free review copy.]
Thursday, June 13, 2019
Physicists are out to unlock the muon’s secret
Fermilab g-2 experiment.
[Image Glukicov/Wikipedia]
Physicists count 25 elementary particles that, for all we presently know, cannot be divided any further. They collect these particles and their interactions in what is called the Standard Model of particle physics.
But the matter around us is made of merely three particles: up and down quarks (which combine to protons and neutrons, which combine to atomic nuclei) and electrons (which surround atomic nuclei). These three particles are held together by a number of exchange particles, notably the photon and gluons.
What’s with the other particles? They are unstable and decay quickly. We only know of them because they are produced when other particles bang into each other at high energies, something that happens in particle colliders and when cosmic rays hit Earth’s atmosphere. By studying these collisions, physicists have found out that the electron has two bigger brothers: The muon (μ) and the tau (τ).
The muon and the tau are pretty much the same as the electron, except that they are heavier. Of these two, the muon has been studied closer because it lives longer – about 2 x 10-6 seconds.
The muon turns out to be... a little odd.
Physicists have known for a while, for example, that cosmic rays produce more muons than expected. This deviation from the predictions of the standard model is not hugely significant, but it has stubbornly persisted. It has remained unclear, though, whether the blame is on the muons, or the blame is on the way the calculations treat atomic nuclei.
Next, the muon (like the electron and tau) has a partner neutrino, called the muon-neutrino. The muon neutrino also has some anomalies associated with it. No one currently knows whether those are real or measurement errors.
The Large Hadron Collider has seen a number of slight deviations from the predictions of the standard model which go under the name lepton anomaly. They basically tell you that the muon isn’t behaving like the electron, which (all other things equal) really it should. These deviations may just be random noise and vanish with better data. Or maybe they are the real thing.
And then there is the gyromagnetic moment of the muon, usually denoted just g. This quantity measures how muons spin if you put them into a magnetic field. This value should be 2 plus quantum corrections, and the quantum corrections (the g-2) you can calculate very precisely with the standard model. Well, you can if you have spent some years learning how to do that because these are hard calculations indeed. Thing is though, the result of the calculation doesn’t agree with the measurement.
This is the so-called muon g-2 anomaly, which we have known about since the 1960s when the first experiments ran into tension with the theoretical prediction. Since then, both the experimental precision as well as the calculations have improved, but the disagreement has not vanished.
The most recent experimental data comes from a 2006 experiment at Brookhaven National Lab, and it placed the disagreement at 3.7σ. That’s interesting for sure, but nothing that particle physicists get overly excited about.
A new experiments is now following up on the 2006 result: The muon g-2 experiment at Fermilab. The collaboration projects that (assuming the mean value remains the same) their better data could increase the significance to 7σ, hence surpassing the discovery standard in particle physics (which is somewhat arbitrarily set to 5σ).
For this experiment, physicists first produce muons by firing protons at a target (some kind of solid). This produces a lot of pions (composites of two quarks) which decay by emitting muons. The muons are then collected in a ring equipped with magnets in which they circle until they decay. When the muons decay, they produce two neutrinos (which escape) and a positron that is caught in a detector. From the direction and energy of the positron, one can then infer the magnetic moment of the muon.
The Fermilab g-2 experiment, which reuses parts of the hardware from the earlier Brookhaven experiment, is already running and collecting data. In a recent paper, Alexander Keshavarzi, on behalf of the collaboration reports they successfully completed the first physics run last year. He writes we can expect a publication of the results from the first run in late 2019. After some troubleshooting (something about an underperforming kicker system), the collaboration is now in the second run.
Another experiment to measure more precisely the muon g-2 is underway in Japan, at the J-PARC muon facility. This collaboration too is well on the way.
While we don’t know exactly when the first data from these experiements will become available, it is clear already that the muon g-2 will be much talked about in the coming years. At present, it is our best clue for physics beyond the standard model. So, stay tuned.
Wednesday, June 12, 2019
[Tam Hunt sent me another lengthy interview, this time with Lee Smolin. Smolin is a faculty member at the Perimeter Institute for Theoretical Physics in Canada and adjunct professor at the University of Waterloo. He is one of the founders of loop quantum gravity. In the past decades, Smolin’s interests have drifted to the role of time in the laws of nature and the foundations of quantum mechanics.]
TH: You make some engaging and bold claims in your new book, Einstein’s Unfinished Revolution, continuing a line of argument that you’ve been making over the course of the last couple of decades and a number of books. In your latest book, you argue essentially that we need to start from scratch in the foundations of physics, and this means coming up with new first principles as our starting point for re-building. Why do you think we need to start from first principles and then build a new system? What has brought us to this crisis point?
LS: The claim that there is a crisis, which I first made in my book, Life of the Cosmos (1997), comes from the fact that it has been decades since a new theoretical hypothesis was put forward that was later confirmed by experiment. In particle physics, the last such advance was the standard model in the early 1970s; in cosmology, inflation in the early 1980s. Nor has there been a completely successful approach to quantum gravity or the problem of completing quantum mechanics.
I propose finding new fundamental principles that go deeper than the principles of general relativity and quantum mechanics. In some recent papers and the book, I make specific proposals for new principles.
TH: You have done substantial work yourself in quantum gravity (loop quantum gravity, in particular) and quantum theory (suggesting your own interpretation called the “real ensemble interpretation”), and yet in this new book you seem to be suggesting that you and everyone else in foundations of physics needs to return to the starting point and rebuild. Are you in a way repudiating your own work or simply acknowledging that no one, including you, has been able to come up with a compelling approach to quantum gravity or other outstanding foundations of physics problems?
LS: There are a handful of approaches to quantum gravity that I would call partly successful. These each achieve a number of successes, which suggest that they could plausibly be at least part of the story of how nature reconciles quantum physics with space, time and gravity. It is possible, for example that these partly successful approaches model different regimes or phases of quantum gravity phenomena. These partly successful approaches include loop quantum gravity, string theory, causal dynamical triangulations, causal sets, asymptotic safety. But I do not believe that any approach to date, including these, is fully successful. Each has stumbling blocks that after many years remain unsolved.
TH: You part ways with a number of other physicists in recent years who have railed against philosophy and philosophers of physics as being largely unhelpful for actual physics. You argue instead that philosophers have a lot to contribute to the foundations of physics problems that are your focus. Have you found philosophy helpful in pursuing your physics for most of your career or is this a more recent finding in your own work? Which philosophers, in particular, do you think can be helpful in this area of physics?
LS: I would first of all suggest we revive the old idea of a natural philosopher, which is a working scientist who is inspired and guided by the tradition of philosophy. An education and immersion in the philosophical tradition gives them access to the storehouse of ideas, positions and arguments that have been developed over the centuries to address the deepest questions, such as the nature of space and time.
Physicists who are natural philosophers have the advantage of being able to situate their work, and its successes and failures, within the long tradition of thought about the basic questions.
Most of the key figures who transformed physics through its history have been natural philosophers: Galileo, Newton, Leibniz, Descartes, Maxwell, Mach, Einstein, Bohr, Heisenberg, etc. In more recent years, David Finkelstein is an excellent example of a theoretical physicist who made important advances, such as being the first to untangle the geometry of a black hole, and recognize the concept of an event horizon, who was strongly influenced by the philosophical tradition. Like a number of us, he identified as a follower of Leibniz, who introduced the concepts of relational space and time.
The abstract of Finkelstein’s key 1958 paper on what were soon to be called black holes explicitly mentions the principle of sufficient reason, which is the central principle of Leibniz’s philosophy. None of the important developments of general relativity in the 1960s and 1970s, such as those by Penrose, Hawking, Newmann, Bondi, etc., would have been possible without that groundbreaking paper by Finkelstein.
I asked Finkelstein once why it was important to know philosophy to do physics, and he replied, “If you want to win the long jump, it helps to back up and get a running start.”’
In other fields, we can recognize people like Richard Dawkins, Daniel Dennett, Lynn Margulis, Steve Gould, Carl Sagan, etc. as natural philosophers. They write books that argue the central issues in evolutionary theory, with the hope of changing each other’s minds. But we the lay public are able to read over their shoulders, and so have front row seats to the debates.
There are also working now a number of excellent philosophers of physics, who contribute in important ways to the progress of physics. One example of these is a group, centred originally at Oxford, of philosophers who have been doing the leading work on attempting to make sense of the Many Worlds formulation of quantum mechanics. This work involves extremely subtle issues such as the meaning of probability. These thinkers include Simon Saunders, David Wallace, Wayne Mhyrvold; and there are equally good philosophers who are skeptical of this work, such as David Albert and Tim Maudlin.
It used to be the case, half a century ago, that philosophers, such as Hilary Putnam, who opined about physics, felt qualified to do so with a bare knowledge of the principles of special relativity and single particle quantum mechanics. In that atmosphere my teacher Abner Shimony, who had two Ph.D’s – one in physics and one in philosophy – stood out, as did a few others who could talk in detail about quantum field theory and renormalization, such as Paul Feyerabend. Now the professional standard among philosophers of physics requires a mastery of Ph.D level physics, as well as the ability to write and argue with the rigour that philosophy demands. Indeed, a number of the people I just mentioned have Ph.D’s in physics.
TH: One of your suggested hypotheses, the next step you take after stating your first principles, is an acknowledgment that time is fundamental, real and irreversible, effectively goring one of the sacred cows of modern physics. You made your case for this approach in your book Time Reborn and I'm curious if you've seen a softening over the last few years in terms of physicists and philosophers beginning to be more open to the idea that the passage of time is truly fundamental? Also, why wouldn't this hypothesis be instead a first principle, if time is indeed fundamental?
LS: In my experience, there have always been physicists and philosophers open to these ideas, even if there is no consensus among those who have carefully thought the issues through.
When I thought carefully about how to state a candidate set of basic principles, it became clear that it was useful to separate principles from hypotheses about nature. Principles such as sufficient reason and the identity of the indiscernible can be realized in formulations of physics in which time is either fundamental or secondary and emergent. Hence those principles are prior to the choice of a fundamental or emergent time. So I think it clarifies the logic of the situation to call the latter choice a hypothesis rather than a principle.
TH: How does viewing time as irreversible and fundamental mesh with your principle of background independence? Doesn’t a preferred spacetime foliation, which would provide an irreversible and fundamental time, provide a background?
LS: Background independence is an aspect of the two principles of Leibniz I just referred to: 1) sufficient reason (PSR) and 2) the identity of the indiscernible (PII). Hence it is deeper than the choice of whether time is fundamental or emergent. Indeed, there are theories which rest on both hypotheses about time (fundamental or emergent). Julian Barbour, for example, is a relationalist who develops background-independent theories in which time is emergent. I am also a relationalist, but I make background-independent models of physics in which time and its passage are fundamental.
Viewing time as fundamental and irreversible doesn’t necessarily imply a preferred foliation; by the latter you mean a foliation of a pre-existing spacetime, specified kinematically in advance of the dynamical evolution. In our energetic causal set models there does arise a notion of the present, but this is determined dynamically by the evolution of the model and so is consistent with what we mean by background independence.
The point is that the solutions to background-independent theories can have preferred frames, so long as they are generated by solving the dynamics. This is, for example, the case with cosmological solutions to general relativity.
TH: You and many other physicists have focused for many years on finding a theory of quantum gravity, effectively unifying quantum mechanics and general relativity. In describing your preferred approach to achieving a theory of quantum gravity worthy of the name you describe why you think quantum mechanics is incomplete and why general relativity is in some key ways likely wrong. Let’s look first at quantum mechanics, which you describe as “wrong” and “incomplete.” Why is the Copenhagen (still perhaps the most popular version of quantum theory) school of quantum mechanics wrong and incomplete?
LS: Copenhagen is incomplete because it is based on an arbitrarily chosen division of the world into a classical realm and a quantum realm. This reflects our practice as experimenters, and corresponds to nothing in nature. This means it is an operational approach which conflicts with the expectations that physics should offer a complete description of individual phenomena, with no reference to our existence, knowledge or measurements.
TH: Your objections just stated (what’s known generally as the “measurement problem”) seem to me, even as an obvious non-expert in this area, to be fairly apparent and accurate objections to Copenhagen. If that’s the case, why is Copenhagen still with us today? Why was it ever considered a serious theory?
LS: I don’t think there are many proponents of the Copenhagen view among people working in quantum foundations, or who have otherwise thought about the issues carefully. I don’t think there are many enthusiastic followers of Bohr left alive.
Meanwhile, what most physicists who are not specialists in quantum foundations practice and teach is a very pragmatic, operational set of rules, which suffices because it closely parallels the practice of actual experimenters. They can get on with the physics without having to take a stand on realism.
What Bohr had in mind was a much more radical rejection of realism and its replacement by a view of the world in which nature and us co-create phenomena. My sense is that most living physicists haven’t read Bohr’s actual writings. There are of course some exceptions, like Chris Fuch’s QBism, which is, to the extent that I understand it, an even more radical view. Even if I disagree, I very much admire Chris for the clarity of his thinking and his insistence on taking his view to its logical conclusions. But, in the end, as a realist who sees the necessity of completing quantum mechanics by the discovery of new physics, the intellectual contortions of anti-realists are, however elegant, no help for my projects.
TH: Could this be a good example of why philosophical training could actually be helpful for physicists?
LS: I would agree, in some cases it could be helpful for some physicists to study philosophy, especially if they are interested in discovering deeper foundational laws. But I would never say anyone should study philosophy, because it can be very challenging reading, and if someone is not inclined to think “philosophically” they are unlikely to get much from the effort. But I would say that if someone is receptive to the care and depth of the writing, it can open doors to new ideas and to a highly critical style of thinking, which could greatly aid someone’s research.
The point I would like to make here is rather different. As I discussed in my earlier books, there are different periods in the development of science during which different kinds of problems present themselves. These require different strategies, different educations and perhaps even different styles of research to move forward.
There are pragmatic periods where the laws needed to understand a wide range of phenomena are in place and the opportunities of greatly advancing our understanding of diverse physical phenomena dominate. These kinds of periods require a more pragmatic approach, which ignores whatever foundational issues may be present (and indeed, there are always foundational issues lurking in the background), and focuses on developing better tools to work out the implications of the laws as they stand.
Then there are (to follow Kuhn) revolutionary periods in science, when the foundations are in question and the priority is to discover and express new laws.
The kinds of people and the kinds of education needed to succeed are different in these two kinds of periods. Pragmatic times require pragmatic scientists, and philosophy is unlikely to be important. But foundational periods require foundational people, many of whom will, as in past foundational periods, find inspiration from philosophy. Of course, what I just said is an oversimplification. At all times, science needs a diverse mix of research styles. We always need pragmatic people who are very good at the technical side of science. And we always need at least a few foundational thinkers. But the optimal balance is different in different periods.
The early part of the 20th Century, through around 1930, was a foundational period. That was followed by a pragmatic period during which the foundational issues were ignored and many applications of the quantum mechanics were developed.
Since the late 1970s, physics has been again in a foundational period, facing deep questions in elementary particle physics, cosmology, quantum foundations and quantum gravity. The pragmatic methods which got us to that point no longer suffice; during such a period we need more foundational thinkers and we need to pay more attention to them.
TH: Turning to general relativity, you also don’t mince your words and you describe the notion of reversible time, thought to be at the core of general relativity, as “wrong.” What does general relativity look like with irreversible and fundamental time?
LS: We posed exactly this question: can we invent an extension of general relativity in which time evolution is asymmetric under a transformation that reverses a measure of time. We found two ways to do this.
TH: You touched on consciousness as a physical phenomenon and a necessary ingredient in our physics in your book, Time Reborn (as have many other physicists over the last century, of course). You spend less time on consciousness in your new book — stating “Let us tiptoe past the hard question of consciousness to simpler questions” — but I’m curious if you’ve considered including as a first principle the notion that consciousness is a fundamental aspect of nature (or not) in your ruminations on these deep topics?
LS: I am thinking slowly about the problems of qualia and consciousness, in the rough direction set out in the epilogue of Time Reborn. But I haven’t yet come to conclusions worth publishing. An early draft of Einstein’s Unfinished Revolution had an epilogue entirely devoted to these questions, but I decided it was premature to publish; it also would have distracted attention from the central themes of that book.
TH: David Bohm, one of the physicists you discuss with respect to alternative versions of quantum theory, delved deeply into philosophy and spirituality in relation to his work in physics, as you discuss briefly in your new book. Do you find Bohm’s more philosophical notions such as the Implicate Order (the metaphysical ground of being in which the “explicate” manifest world that we know in our normal every day life is enfolded, and thus “implicate”) helpful for physics?
LS: I am afraid I’ve not understood what Bohm was aiming for in his book on the implicate order, or his dialogues with Krishnamurti, but it is also true that I haven’t tried very hard. I think one can admire greatly the practical and psychological knowledge of Buddhism and related traditions, while remaining skeptical of their more metaphysical teachings.
TH: Bohm’s Implicate Order has much in common with physical notions such as the (nonluminiferous) ether, which has been revived in today’s physics by some heavyweights such as Nobel Prize winner Frank Wilczek (The Lightness of Being: Mass, Ether, and the Unification of Forces) as another term for the set of space-filling fields that underlie our reality. Do you take the idea of reviving some notion of the ether as a physical/metaphysical background at all seriously in your work?
LS: The important part of the idea of the ether was that it is a smooth, fundamental, physical substance, which had the property that vibrations and stresses within it reproduced the phenomena described by Maxwell’s field theory of electromagnetism. It was also important that there was a preferred frame of reference associated with being at rest with respect to this substance.
We no longer believe any part of this. The picture we now have is that any such substance is made of a large collection of atoms. Therefore the properties of any substance are emergent and derivative. I don’t think Frank Wilczek disagrees with this, I suspect he is just being metaphorical.
TH: He doesn’t seem to be metaphorical, writing in a 1999 article:“Quite undeservedly, the ether has acquired a bad name. There is a myth, repeated in many popular presentations and textbooks, that Albert Einstein swept it into the dustbin of history. The real story is more complicated and interesting. I argue here that the truth is more nearly the opposite: Einstein first purified, and then enthroned, the ether concept. As the 20th century has progressed, its role in fundamental physics has only expanded. At present, renamed and thinly disguised, it dominates the accepted laws of physics. And yet, there is serious reason to suspect it may not be the last word.” In his 2008 book mentioned above, he reframes the set of accepted physical fields as “the Grid” (which is “the primary world-stuff”) or ether. Sounds like you don’t find this re-framing very compelling?
LS: What is true is that quantum field theory (QFT) treats all propagating particles and fields as excitations of a (usually unique) vacuum state. This is analogized to the ether, but in my opinion it’s a bad analogy. One big difference is that the vacuum of a QFT is invariant under all the symmetries of nature, whereas the ether breaks many of them by defining a preferred state of at rest.
TH: You consider Bohm’s alternative quantum theory in some depth, and say that “it makes complete sense,” but after further discussion you consider it inadequate because it is generally considered to be incompatible with special relativity, among other problems.
LS: This is not the main reason I don’t think pilot wave theory describes nature.
Pilot wave theory is based on two equations. One, which is the same as in ordinary QM-the Schrödinger equation, propagates the wave-function, while the second-the guidance equation, guides the “particles.” The first can be made compatible with special relativity, while the second cannot. But when one adds an assumption about probabilities, the averages of the guided particles follow the waves and so agree with both ordinary QM and special relativity. In this way you can say that pilot wave theory is “weakly compatible” with special relativity, in the sense that, while there is a preferred sense of rest, it can’t be measured.
TH: If one considers time to be fundamental and irreversible, isn’t there a relativistic version of Bohmian mechanics readily available by adopting some version of Lorentzian or neo-Lorentzian relativity (which are background-dependent)?
LS: Maybe — you are describing research to be done.
TH: Last, how optimistic are you that your view, that today’s physics needs some really fundamental re-thinking, will catch on with the majority of today’s physicists in the next decade or so?
LS: I’m not but I wouldn’t expect any such call for a reconsideration of the basic principles would be popular until it has results which make it hard to avoid thinking about.
Monday, June 10, 2019
Sometimes giving up is the smart thing to do.
[likely image source]
A few years ago I signed up for a 10k race. It had an entry fee, it was a scenic route, and I had qualified for the first group. I was in best shape. The weather forecast was brilliant.
Two days before the race I got a bad cold. But that wouldn’t deter me. Oh, no, not me. I’m not a quitter. I downed a handful of pills and went nevertheless. I started with a fever, a bad cough, and a banging head.
It didn’t go well. After half a kilometer I developed a chest pain. After one kilometer it really hurt. After two kilometers I was sure I’d die. Next thing I recall is someone handing me a bottle of water after the finish line.
Needless to say, my time wasn’t the best.
But the real problem began afterward. My cold refused to clear out properly. Instead I developed a series of respiratory infections. That chest pain stayed with me for several months. When the winter came, each little virus the kids brought home knocked me down.
I eventually went to see a doctor. She sent me to have a chest X-ray taken on the suspicion of tuberculosis. When the X-ray didn’t reveal anything, she put me on a 2 week regime of antibiotics.
The antibiotics indeed finally cleared out whatever lingering infection I had carried away. It took another month until I felt like myself again.
But this isn’t a story about the misery of aging runners. It’s a story about endurance sport of a different type: academia.
In academia we write Perseverance with capital P. From day one, we are taught that pain is normal, that everyone hurts, and that self-motivation is the highest of virtues. In academia, we are all over-achievers.
This summer, as every summer for the past two decades, I receive notes about who is leaving. Leaving because they didn’t get funding, because they didn’t get another position, or because they’re just no longer willing to sacrifice their life for so little in return.
And this summer, as every summer for the past two decades, I find myself among the ones who made it into the next round, find myself sitting here, wondering if I’m worthy and if I’m in the right place doing the right thing at the right time. Because, let us be honest. We all know that success in academia has one or two elements of luck. Or maybe three. We all know it’s not always fair.
I’m writing this for the ones who have left and the ones who are about to leave. Because I have come within an inch of leaving half a dozen times and I have heard the nasty, nagging voice in the back of my head. “Quitter,” it says and laughs, “Quitter.”
Don’t listen. From the people I know who left academia, few have regrets. And the few with regrets found ways to continue some research along with their new profession. The loss isn’t yours. The loss is one for academia. I understand your decision and I think you choose wisely. Just because everyone you know is on a race to nowhere doesn’t mean going with them makes sense. Sometimes, giving up is the smart thing to do.
A year after my miserable 10k experience, I signed up for a half-marathon. A few kilometers into the race, I tore a muscle.
I don’t get a runner’s high, but running increases my pain tolerance to unhealthy levels. After a few kilometers, you could probably stab me in the back and I wouldn’t notice. I could well have finished that race. But I quit.
Saturday, June 08, 2019
Book Review: “Beyond Weird” by Philip Ball
By Philip Ball
University of Chicago Press (October 18, 2018)
I avoid popular science articles about quantum mechanics. It’s not that I am not interested, it’s that I don’t understand them. Give me a Hamiltonian, a tensor-product expansion, and some unitary operators, and I can deal with that. But give me stories about separating a cat from its grin, the many worlds of Wigner’s friend, or suicides in which you both die and not die, and I admit defeat on paragraph two.
Ball is guilty of some of that. I got lost half through his explanation how a machine outputs plush cats and dogs when Alice and Bob put in quantum coins, and still haven’t figured out why the seer’s daughter wanted to be wed to a man evidently more stupid than she.
But then, clearly, I am not the book’s intended audience, so let me instead tell you something more helpful.
Ball knows what he writes about, that’s obvious from page one. For all I can tell the science in his book is flawless. It is also engagingly told, with some history but not too much, with some reference to current research, but not too much, with some philosophical discourse but not too much. Altogether, it is a well-balanced mix that should be understandable for everyone, even those without prior knowledge of the topic. And I entirely agree with Ball that calling quantum mechanics “weird” or “strange” isn’t helpful.
In “Beyond Weird,” Ball does a great job sorting out the most common confusions about quantum mechanics, such as that it is about discretization (it is not), that it defies the speed of light limit (it does not), or that it tells you something about consciousness (huh?). Ball even cleans up with the myth that Einstein hated quantum mechanics (he did not), Feynman dubbed the Copenhagen interpretation “Shut up and calculate” (he did not, also, there isn’t really such a thing as the Copenhagen interpretation), and, best of all, clears out the idea that many worlds solves the measurement problem (it does not).
In Ball’s book, you will learn just what quantum mechanics is (uncertainty, entanglement, superpositions, (de)coherence, measurement, non-locality, contextuality, etc), what the major interpretations of quantum mechanics are (Copenhagen, QBism, Many Worlds, Collapse models, Pilot Waves), and what the currently discussed issues are (epistemic vs ontic, quantum computing, the role of information).
As someone who still likes to read printed books, let me also mention that Ball’s is just a pretty book. It’s a high quality print in a generously spaced and well-readable font, the chapters are short, and the figures are lovely, hand-drawn illustrations. I much enjoyed reading it.
It is also remarkable that “Beyond Weird” has little overlap with two other recent books on quantum mechanics which I reviewed: Chad Orzel’s “Breakfast With Einstein” and Anil Ananthaswamy’s “Through Two Doors At Once.” While Ball focuses on the theory and its interpretation, Orzel’s book is about applications of quantum mechanics, and Ananthaswamy’s is about experimental milestones in the development and understanding of the theory. The three books together make an awesome combination.
And luckily the subtitle of Philip Ball’s book turned out to be wrong. I would have been disturbed indeed had everything I thought I knew about quantum physics been different.
[Disclaimer: Free review copy.]
Related: Check out my list of 10 Essentials of Quantum Mechanics.
Wednesday, June 05, 2019
If we spend money on a larger particle collider, we risk that progress in physics stalls.
[Image: CERN]
Particle physicists have a problem. For 40 years they have been talking about new particles that never appeared. The Large Hadron Collider was supposed to finally reveal them. It didn’t. This $10 billion machine has found the Higgs-boson, thereby completing the standard model of particle physics, but no other fundamentally new particles.
With this, the Large Hadron Collider (LHC) has demonstrated that arguments used by particle physicists for the existence of new particles beyond those in the standard model were wrong. With these arguments now falsified, there is no reason to think that a next larger particle collider will do anything besides measuring the parameters of the standard model to higher precision. And with the cost of a next larger collider estimated at $20 billion or so, that’s a tough sell.
Particle physicists have meanwhile largely given up spinning stories about discovering dark matter or recreating the origin of the universe, because it is clear to everyone now that this is marketing one cannot trust. Instead, they have a new tactic which works like this.
First, they will refuse to admit anything went wrong in the past. They predicted all these particles, none of which was seen, but now they won’t mention it. They hyped the LHC for two decades, but now they act like it didn’t happen. The people who previously made wrong predictions cannot be bothered to comment. Except for those like Gordon Kane and Howard Baer, who simply make new predictions and hope you have forgotten they ever said anything else.
Second, in case they cannot get away with outright denial, they will try to convince you it is somehow interesting they were wrong. Indeed, it is interesting – if you are a sociologist. A sociologist would be thrilled to see such an amazing example of groupthink, leading a community of thousands of intelligent people to believe that relying on beauty is a good method to make predictions. But as far as physics is concerned, there’s nothing to learn here, except that beauty isn’t a scientific criterion, which is hardly a groundbreaking insight.
Third, they will sure as hell not touch the question whether there might be better ways to invest the money, because that can only work to their disadvantage. So they will tell you vague tales about the need to explore nature, but not ever discuss whether other methods to explore nature would advance science more.
But fact is, building a large particle collider presently has a high cost for little expected benefit. This money would be better invested into less costly experiments with higher discovery potential, such as astrophysical searches for dark matter (I am not talking about direct detection experiments), table-top searches for quantum gravity, 21cm astronomy, gravitational wave interferometers, high-precision but low-energy measurements, just to mention a few.
And that is only considering the foundations of physics, leaving aside the overarching question of societal benefit. $20 billion that go into a particle collider are $20 billion that do not go into nuclear fusion, drug development, climate science, or data infrastructure, all of which can be reasonably expected to have a larger return on investment. At the very least it is a question one should discuss.
Add to this that the cost for a larger particle collider could dramatically go down in the next 20-30 years with future technological advances, such as wake-field acceleration or high-temperature superconductors. In the current situation, with colliders so extremely costly, it makes economically more sense to wait if one of these technologies reaches maturity. Who wants to spend some billions digging a 100km tunnel when that tunnel may no longer be necessary by the time the collider could be be in operation?
Anyone who talks about building a larger particle collider, but who does not mention the above named issues demonstrates that they neither care about progress in physics nor about social responsibility. They do not want to have a sincere discussion. Instead, they are presenting a one-sided view. They are merely lobbying.
If you encounter any such person, I recommend you ask them the following: Why were all these predictions wrong and what have particle physicists learned from it? Why is a larger particle collider a good way to invest such large amounts of money in the foundations of physics now? What is the benefit of such an investment for society?
And do not take as response arguments about benefiting collaborations, scientific infrastructure, or education, because such arguments can be made in favor of any large investment into science. Such generic arguments do not explain why a particle collider in particular is the thing to do. I have a handy list with responses to further nonsense arguments here.
A prediction. If you give particle physicists money for a next larger collider this is what will happen: This money will be used to hire more people who will tell you that particle physics is great. They will continue to invent new particles according to some new fad, and then claim they learned something when their expensive machine falsifies these inventions. In 40 years, we will still not know what dark matter is made of or how to quantize gravity. We will still not have a working fusion reactor, will still not have quantum computers, and will still have group-think in science. Particle physicists will then begin to argue they need a larger collider. Rinse and repeat.
Of course it is possible that a larger collider will find something new. The only way to find out with certainty is to build it and look. But the same “Just Look” argument can be made about any experiment that explores new frontiers. Point is: Particle physicists have so far failed to come up with any reason why going to higher energies is currently a promising route forward. The conservative expectation therefore is that the next larger collider would be much like the LHC, but for twice the price and without the Higgs.
Particle physics is a large and very influential community. Do not fall for their advertisements. Ask the hard questions.
Monday, June 03, 2019
The multiverse hypothesis: Are there other universes besides our own?
You are one of some billion people on this planet. This planet is one of some hundred billion planets in this galaxy. This galaxy is one of some hundred billion galaxies in the universe. Is our universe the only one? Or are there other universes?
In the past decades, the idea that our universe is only one of many, has become popular among physicists. If there are several universes, their collection is called the “multiverse”, and physicists have a few theories for this that I want to briefly tell you about.
1. Eternal Inflation.
We do not know how our universe was created and maybe we will never know. But according to a presently popular theory, called “inflation”, our universe was created from a quantum fluctuation of a field called the “inflaton”. In this case, there would be infinitely many such fluctuations giving rise to infinitely many universes. This process of universe-creation never stops, which is why it is called eternal inflation.
These other universes may contain the same matter as ours, but in different arrangements, or they may contain different types of matter. They may have the same laws of nature, or entirely different laws. Really, pretty much anything goes, as long as you have space, time, and matter.
2. The String Theory Landscape
The string theory landscape came out of the realization that string theory does not, as originally hoped, uniquely predict the laws of nature we observe. Instead, the theory allows for many different laws of nature, that would give rise to universes different from our own. The idea that all of them exist goes together well with eternal inflation, and so, the two theories are often lumped together.
3. Many Worlds
Many Worlds is an interpretation of quantum mechanics. In quantum mechanics, we can make predictions only for probabilities. We can say, for example, a particle goes left or right, each with 50% probability. But then, when we measure it, we find it either left or right. And then we know where it is with 100% probability. So what happened with the other option?
The most common attitude you find among physicists is who cares? We are here and that’s what we have measured, now let’s move on.
The many worlds interpretation, however, postulates that all possible outcomes of an experiment exist, each in a separate universe. It’s just that we happen to live in only one of those universes, and never see the other ones.
4. The Simulation Hypothesis
Video games are getting better by the day, and it’s easy to imagine that maybe one day they will be so good we can no longer tell apart the virtual world and the real world.
This brings up the question whether maybe we already live in a virtual world, one that is programmed by some being more intelligent than us and technologically ahead? If that is so, there is no reason to think that our universe is the only simulation that is going on. There may be many other universe simulations, programmed by superintelligent beings. This, too, is a variant of the multiverse.
5. The Mathematical Universe
Finally, let me briefly mention the idea, popularized by Max Tegmark, that all of mathematics exists, and that we merely observe a very small part of it. It is this small part of mathematics that we call our universe.
Are these theories science? Or are they fiction? Let me know what you think.
Does God exist? Science does not have an answer.
|
0bbc09a3e57ba563 | Skip to main content
Physics LibreTexts
4.2: Cartesian Symmetry
• Page ID
• Separation of Variables
Potentials with particular properties encourage us to separate variables in the cartesian coordinate system, so let's look at how this works. We seek solutions to the stationary-state Schrödinger equation that admit wave functions which can be written as a product of three functions of single variables:
\[ \psi_E\left(x,y,z\right) = X\left(x\right)Y\left(y\right)Z\left(z\right) \]
Plugging this into the stationary-state Schrödinger equation, and dividing the whole equation by the wave function separates it into terms that are functions of only \(x\), \(y\), and \(z\), along with a potential that so far we have not restricted:
\[ -\dfrac{\hbar^2}{2m}\left(\dfrac{\partial^2}{\partial x^2} + \dfrac{\partial^2}{\partial y^2} + \dfrac{\partial^2}{\partial z^2}\right)X\left(x\right)Y\left(y\right)Z\left(z\right) + V\left(x,y,z\right)X\left(x\right)Y\left(y\right)Z\left(z\right)= E\;X\left(x\right)Y\left(y\right)Z\left(z\right) \\ \Rightarrow \;\;\; \dfrac{1}{X}\dfrac{\partial^2X}{\partial x^2} + \dfrac{1}{Y}\dfrac{\partial^2Y}{\partial y^2} +\dfrac{1}{Z}\dfrac{\partial^2Z}{\partial z^2} -\dfrac{2m}{\hbar^2}V\left(x,y,z\right) = -\dfrac{2mE}{\hbar^2} \]
For this method to be useful, we need to be able to separate the entire equation into a sum of terms that are exclusively functions of one variable at a time (\(x\), \(y\), or \(z\)). To see how this works, consider the following:
\[ f\left(x\right) + g\left(y\right) + h\left(z\right) = constant \]
We can plug any \(x\) value we like into \(f\left(x\right)\), without changing any of the \(y\) or \(z\) values. For this equation to remain correct, it must mean that \(f\left(x\right)\) is the same constant value for all choices of \(x\). The same argument can be made for \(g\left(y\right)\) and \(h\left(z\right)\). This gives us three separate equations, each in a single variable.
The first three terms are already separated into \(x\), \(y\), and \(z\), but the potential function poses a problem. This method is only really effective in two cases: When the potential is a constant (universally, or piecewise), or when it splits up into a sum of functions of single variables. We will look at examples of all these cases.
Free Particle
If the potential is universally constant, then the particle is obviously free. As with the one-dimensional free particle, the stationary-state Schrödinger's equation gives us wave functions of energy eigenstates, but we can filter-out the momentum eigenstates (plane waves) if we wish. Plugging zero in for the potential allows us to separate the equation into three differential equations in single variables. The choices for the form of the constants below will become quickly apparent:
\[ \dfrac{1}{X}\dfrac{d^2X}{dx^2}=-k_x^2,\;\;\;\;\;\dfrac{1}{Y}\dfrac{d^2Y}{dy^2}=-k_y^2,\;\;\;\;\dfrac{1}{Z}\dfrac{d^2Z}{dz^2}=-k_z^2,\;\;\;\;\; k_x^2+k_y^2+k_z^2=-\dfrac{2mE}{\hbar^2} \]
The plane wave solutions of each of the separate differential equations are:
\[ X\left(x\right)=A_xe^{ik_x x},\;\;\;\;\; Y\left(y\right)=A_ye^{ik_y y},\;\;\;\;\; Z\left(z\right)=A_ze^{ik_z z} \]
Reconstructing the full momentum eigenstate wave function, we get:
\[ \psi_k\left(x,y,z\right) = X\left(x\right)Y\left(y\right)Z\left(z\right) = Ae^{i\left(k_x x + k_y y + k_z z\right)}\]
If we define the momentum vector \(\overrightarrow p\) in terms of a wave vector \(\overrightarrow k = k_x \widehat i + k_y \widehat j + k_z \widehat k\), we get the rather compact plane wave solution moving in a specific direction:
\[ \psi_k\left(\overrightarrow r\right) = Ae^{i\overrightarrow k \cdot \overrightarrow r},\;\;\;\;\;\;\;\; \overrightarrow k = \dfrac{\overrightarrow p}{\hbar},\;\;\;\;\;\;\;\; E=\dfrac{p^2}{2m}=\dfrac{\hbar^2}{2m}\left(\overrightarrow k \cdot \overrightarrow k\right) \]
This plane wave is also an energy eigenstate, so the full time-dependent wave function can also be written:
\[ \psi_k\left(\overrightarrow r, t\right) = Ae^{i\left(\overrightarrow k \cdot \overrightarrow r - \omega t\right)},\;\;\;\;\;\;\;\; \omega = \dfrac{E}{\hbar} \]
Particle in a 3D Infinite Square Well
The infinite square well in three dimensions has the same property as the one-dimensional box that the potential is zero everywhere inside, and instantly becomes infinity at the boundaries. The one-dimensional case had a specified length, but we will not saddle this infinite well with the same width in all three directions, meaning we will confine the particle to a rectangular prism, not a cube. We will define our coordinate system so that the walls are parallel to the three planes, and (unlike what we chose for the one-dimensional case), we will place the origin at one of the box corners. The lengths of the walls along the \(x\), \(y\), and \(z\) axes we will call \(L_x\), \(L_y\), \(L_z\), respectively.
Figure 4.2.1 Three-Dimensional Infinite Square Well
Mathematically, the potential is written:
\[ V\left(x,y,z\right) = \left\{ \begin{array}{l} 0 & 0<x<L_x\;\;and\;\;0<y<L_y\;\;and\;\;0<z<L_z \\ \infty & elsewhere \end{array} \right. \]
As we saw for the one-dimensional box, we can use a combination of two oppositely-moving plane waves (for each of the three axes) to construct a wave function of definite energy that vanishes at the walls (i.e. sinusoidal functions). The individual solutions to the differential equations in \(x\), \(y\), and \(z\) are the same as before, with two exceptions: Each dimension involves a separate harmonic number \(n\), and as we have chose the origin to be at a wall (rather than centering it within the well), the wave functions are all sines:
\[ X\left(x\right) = A_x\sin\dfrac{n_x\pi \;x}{L_x} \;\;\;\;\; Y\left(y\right) = A_y\sin\dfrac{n_y\pi \;y}{L_y} \;\;\;\;\; Z\left(z\right)= A_z\sin\dfrac{n_z\pi \;z}{L_z},\;\;\;\;\; n_x,\;n_y,\;n_z=1,\;2,\;\dots\]
Before we move on to the energy spectrum, let's construct the spatial full wave function by multiplying the partial wave functions. We also have to deal with normalizing the full wave function. Normalization does not tell us anything about the values of \(A_x\), \(A_y\), and \(A_z\), but their product must equal the normalization constant for the full wave function. The normalization integral is over all three dimensions and integrals over \(x\) will only affect \(X\left(x\right)\), and similarly for \(y\) and \(z\), so the normalization constant for the full wave function turns out to be the same as the product of the normalization constants for the three separate one-dimensional wave functions.
\[ \psi_{n_xn_yn_z}\left(x,y,z\right) = \sqrt{\dfrac{8}{L_xL_yL_z}}\sin\dfrac{n_x\pi \;x}{L_x}\sin\dfrac{n_y\pi \;y}{L_y}\sin\dfrac{n_z\pi \;z}{L_z},\;\;\;\;\; n_x,\;n_y,\;n_z=1,\;2,\;\dots \]
In keeping with our notation of labeling the wave function with the quantum numbers, we have labeled the energy eigenstate wave function accordingly.
Plugging the wave function back into the stationary-state Schrödinger equation, we get the following energy spectrum:
\[E_{n_xn_yn_z} = \left(\dfrac{n_x^2}{L_x^2}+\dfrac{n_y^2}{L_y^2}+\dfrac{n_z^2}{L_z^2}\right)\dfrac{\pi^2\hbar^2}{2m},\;\;\;\;\; n_x,\;n_y,\;n_z=1,\;2,\;\dots\]
One might be tempted to think that the ground state energy of this particle occurs when the \(n\) along the longest dimension is 1, while the others are zero, but It should be emphasized that the minimum value of all three \(n\) values is 1. None of the three modes can provide a zero contribution to the energy.
Another way that the three-dimensional case differs from the one-dimensional case is apparent if we consider the hierarchy of the energy spectrum. Suppose for example, we wanted to draw an energy-level diagram for this spectrum. We know that \(\psi_{111}\) is the ground state, but which quantum state would be the first excited state? The answer depends upon the dimensions of the well. If the \(L_x\) is greater than the other two box dimensions, then the smallest increase in the total energy will come from incrementing \(n_x\) from 1 to 2, and the first excited state would be \(\psi_{211}\). What about the second excited state? Well, now we need even more information. If \(L_x\) is only slightly greater than \(L_y\) dimension (and both are longer than \(L_z\)), then the second excited state would be \(\psi_{221}\). But if \(L_x\) is significantly longer than the other dimensions, then the second excited state would be \(\psi_{311}\). In other words, with three quantum numbers, we have lost the ability (at least for this case) of expressing the energy levels with a single integer.
The 3D Harmonic Oscillator
As our final example of a potential that allows for separation of variables in cartesian coordinate, we consider the three dimensional harmonic oscillator, which has a potential that is a sum of functions purely of \(x\), \(y\), and \(z\). In general, the spring constants are different for each direction, so:
\[ V\left(x,y,z\right) = \frac{1}{2}\kappa_x x^2 + \frac{1}{2}\kappa_y y^2 + \frac{1}{2}\kappa_z z^2 \]
Plugging this into Equation 4.2.2 results an equation with three separated functions again:
\[ \left[\dfrac{1}{X}\dfrac{\partial^2X}{\partial x^2} + \dfrac{\kappa_x m}{\hbar^2} x^2 \right] + \left[\dfrac{1}{Y}\dfrac{\partial^2Y}{\partial y^2} + \dfrac{\kappa_y m}{\hbar^2} y^2 \right] + \left[\dfrac{1}{Z}\dfrac{\partial^2Z}{\partial z^2} + \dfrac{\kappa_z m}{\hbar^2} z^2 \right] = -\dfrac{2mE}{\hbar^2} \]
Following the same procedure as before gives us three separate differential equations. This decoupling maneuver once again leaves us with three wave function pieces, which are multiplied together to get the full wave function. As with the case of the square well, the energy contributions of the partial wave functions are added together to give the total energy of the state:
\[ E_{n_xn_yn_z} = \left(n_x + \frac{1}{2}\right)\hbar\sqrt{\dfrac{\kappa_x}{m}} + \left(n_y + \frac{1}{2}\right)\hbar\sqrt{\dfrac{\kappa_y}{m}} + \left(n_z + \frac{1}{2}\right)\hbar\sqrt{\dfrac{\kappa_z}{m}} \]
While we frequently encounter interactions in the real world that approximate the harmonic oscillator potential, it is rare that the interactions between particles are different along different axes, so of particular interest is the isotropic harmonic oscillator, which involves equal spring constants (\(\kappa\)) in all three directions. In this case, the energy spectrum reduces to:
\[ E_{n_xn_yn_z} = \left(n_x + n_y + n_z + \frac{3}{2}\right)\hbar\omega_c,\;\;\;\;\;\;\;\; \omega_c = \sqrt{\dfrac{\kappa}{m}} \]
Notice that even with the simplification of isotropy, three quantum numbers are required to define the state.
Notice that unlike one-dimensional potentials, in these cases a single quantum number does not define the energy. But there is even more to it than that. Looking at the case of the three-dimensional box again, suppose it has three equal sides: \(L_x=L_y=L_z=L\). In this case, there exist three distinct quantum states that possess the same total energy, namely \(\psi_{211}\), \(\psi_{121}\), and \(\psi_{112}\). These states clearly possess equal energies, and they are distinct because, for example, the states \(\psi_{211}\) and \(\psi_{121}\) yield different uncertainties in the \(x\)-component of the particle's position (\(\psi_{211}\) has two antinodes along the \(x\) axis, while \(\psi_{121}\) has only one). A similar thing occurs (only more dramatically) for the isotropic harmonic oscillator, as any combination of \(n_x\), \(n_y\), and \(n_z\) that gives the same sum will result in the same energy.
When multiple quantum states yield the same energy, they are said to be degenerate, and if there are a total of \(j\) distinct states for the same energy, that energy level is said to be \(j\)-fold degenerate. Typically degeneracy comes about due to obvious symmetries, such as in the case mentioned above. All we need to do is rename our axes, and the states morph into each other, so naturally the energies are the same. But occasionally degeneracies arise unexpectedly, through what can only really be described as a coincidence. These are called accidental degeneracies, the most famous of which arises for the hydrogen atom, as we will see later. An example of one of these for the three-dimensional square well arises for the states \(\psi_{511}\), \(\psi_{151}\), \(\psi_{115}\), and \(\psi_{333}\) – this energy level is 4-fold degenerate, rather than the "expected" 3-fold degeneracy. Naturally the first three states are not unexpectedly degenerate, but the fourth seems to come from left field.
Symmetric quantum systems are common in physics, and degeneracy follows them everywhere. This can cause difficulty in developing theory, as some internal structure can be obscured when different configurations result in the same energy spectrum. The trick then is to introduce an external perturbation that breaks the symmetry, thereby separating otherwise degenerate states. The analogous case for the cubical box would be squeezing or stretching one of the dimensions slightly. This also can work in the other direction – we might see unexpected additional spectral lines that indicate that there is additional structure present that breaks the symmetry we thought existed. So the analogous case for this is an infinite well that we think should be cubical, but provides a spectrum with energy levels landing between those that we compute. |
b09a6183763b267d | The Schrodinger Equation
The Schrödinger equation is the fundamental equation of physics for describing quantum mechanical behavior. It is also often called the Schrödinger wave equation, and is a partial differential equation that describes how the wavefunction of a physical system evolves over time. Viewing quantum mechanical systems as solutions to the Schrödinger equation is sometimes known as the Schrödinger picture, as distinguished from the matrix mechanical viewpoint, sometimes known as the Heisenberg picture.
Leave a Reply
You are commenting using your account. Log Out / Change )
Twitter picture
Facebook photo
Google+ photo
Connecting to %s
%d bloggers like this: |
6e4926ffd3eac0d8 | Wednesday, September 30, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Beaten with hockey sticks: Yamal tree fraud by Briffa et al.
I will open a discussion thread about this development, too. Steve McIntyre has broken another hockey stick:
Yamal: a divergence problem (click)
... a copy at Climate Audit (click)
Because Climate Audit is overloaded, here's the Google cache.
The finding is very easy to describe. Briffa et al. (Science, published September 2009, see also Briffa et al., Philosophical Transactions 2008) offered another version of a "hockey stick graph", a would-be reconstruction of the temperatures in the last 2000 years that claimed to show a "sudden" warming in the later part of the 20th century, much like the discredited paper by Michael Mann et al.
Papers by Mann, Bradley, and Hughes in 1998 and 1999, included as a symbol of global warming into the previous IPCC report in 2001, indicated constant temperatures before 1900 and a dramatic warming afterwards. However, the papers have been proven wrong.
If you haven't heard about the lethal bug of the Mann methodology yet, the problem of the MBH98, MBH99 papers was that the algorithm preferred proxies - or trees (or their equivalents) - that showed a warming trend in the 20th century, assuming that this condition guaranteed that the trees were sensitive to temperature.
Tuesday, September 29, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Political racketeering
Special welcome to the Swedish EU presidency.
Two interesting examples of blackmailing in politics emerged today.
Iran vs West (click)
A hardcore Iranian lawmaker said that Iran could quit the nuclear non-proliferation treaty if the pressure from the West continues.
Eurocrats vs Czechia (click)
Mirek Topolánek, the leader of the Czech center-right ODS party, said that he was effectively told by Jose Barroso that all EU countries but Czechia will have a commissioner if President Klaus doesn't become another puppet of the EU bureaucracy and doesn't sign the Treaty of Lisbon. ;-)
Monday, September 28, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Four degrees Celsius in 50 years?
Last week, Yugratna Srivastava, a 13-year-old Indian girl, was hired by the United Nations to present a poem to the world's leaders and the humanity.
In the tradition of Nazi and Soviet methods of propaganda, a kid was asked to explain that our world is gonna fry unless everyone buys all the ideology and policies that her propagandistic employers wanted her to disseminate.
There apparently exist adults whose skulls are comparably unhinged. The girl wasn't strong enough to convince the world about the looming catastrophe - and they need much stronger "momentum" for the Copenhagen negotiations that should efficiently cripple the world's economy.
2009 physics Nobel prize: speculations
Update: The 2009 physics Nobel prize went to Charles Kuen Kao (1/2) and Willard Boyle (1/4) and George Smith (1/4): see a newer blog article
Next week, Scandinavia will tell us about their choice of Nobel prizes for 2009. The physics Nobel prize will be announced on Tuesday, October 6th, at 11:45 a.m., Swedish time.
Who is going to win the physics award that has preserved its exceptional status because the prize has never been flagrantly misdirected, unlike the peace Nobel prize, so far?
First, let us summarize the winners since October 2004 when this blog was born:
Now, it may be fun to recall some predictions made in the previous years:
Very soon, I will review some older scenarios which may still be possible in 2009. Meanwhile, Thomson Scientific offered their own, new predictions based on their algorithm analyzing the network of citations. They managed to accurately guess the 2007 winners - Fert, Grünberg - although they did so already in 2006 and F+G were not their top choice.
Sunday, September 27, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
First Czecho-Slovak Superstar
See also: Dominika Stará vs Martin Chodúr
See also: Dominika Stará: Je suis Malade
After a couple of Czech (CZ) Pop Idols and Slovak (SK) Pop Idols and one year with the Czech X-Factor, the Czech and Slovak contests were wisely unified.
This guy has only been training the song for 1 hour - during the reduction from 118 to 90. In my opinion, Martin Chodúr's edition of "Supreme" was more convincing, testosterone-loaded than the original version of Robbie Williams.
The moderators are Mr Leoš Mareš (CZ) and Ms Adéla Banášová (SK) and they're doing a superb job. I used to dislike Mareš because he seemed excessively pompous concerning his extraordinarily high income etc - but these negative emotions of mine are gone by now. There are two Czech and two Slovak judges - with all four sex/nation combinations: Mr Palo Habera (SK, younger), Mr Ondřej Hejma (CZ, older), Ms Dara Rollins (SK, blonde), Ms Marta Jandová (CZ, brunette).
Friday, September 25, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Pope visits the Czech infidels
The leaders of the Czech Republic and the Vatican in their characteristic hats. Note the similarity between the two.
Tomorrow, the Holy Father arrives to Czechia which is probably the most atheist country in the world. The Reference Frame wishes him a lot of good luck and a nice, relaxing stay.
On Monday, we celebrate a national holiday, the St Wenceslaus Day (from the Christmas carol, Good King Wenceslaus), our patron and one of the first dukes (and de facto kings) who was murdered by his brother in the town of Boleslav that the Holy Father will visit.
For 95% of the Czechs, it's just another work-free day, as we will explain.
D-braneworlds strike back
Today, Mirjam Cvetič, James Halverson, and Robert Richter wrote the first hep-th paper (that might normally be a hep-ph one, I think):
Mass hierarchies from MSSM orientifold compactifications
Recall that the main detailed classes of phenomenological scenarios within string theory are:
• weakly coupled heterotic strings on Calabi-Yau three-folds
• its strongly coupled version, Hořava-Witten heterotic M-theory on Calabi-Yau three-folds
• M-theory on singular G2 holonomy manifolds
• F-theory on Calabi-Yau four-folds and its type IIB descriptions
• type IIA braneworlds with D6-branes and orientifolds (and lots of quiver diagrams)
Their subsets are related by various dualities, they have various advantages and disadvantages.
Thursday, September 24, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Google Chrome Frame for Internet Explorer
Microsoft Internet Explorer users are recommended to install
Google Chrome Frame: download, info,
a plug-in for MSIE 6/7/8 that replaces the Microsoft JavaScript engine by a much faster Chrome JavaScript engine. The Chrome engine also adds support for HTML5, canvas, and other features.
The plug-in is only activated for websites whose webmasters have inserted the following meta-tag to their pages:
<meta content='chrome=1' http-equiv='X-UA-Compatible'/>
But The Reference Frame is among them. As far as my measurements go, it used to take 10 seconds from pressing the "TRF" button to seeing the top of the right sidebar in Internet Explorer. This rather long time makes TRF an excellent benchmark. ;-)
With Google Chrome Frame, the time was reduced to 6 seconds. That's an improvement. But my Google Chrome 4.0 shows the sidebar in 3 seconds, much like the newest official Mozilla Firefox, namely 3.5.3. Chrome is much faster in some respects: for example, its startup is literally immediate.
Poland, Estonia win: indulgences for free
Breaking news: Reuters is finally learning how to write balanced and attractive articles. The article called U.N. climate meeting was propaganda: Czech president is currently the most popular article on the Reuters website, ahead of the sex of Mackenzie Phillips (see the list in the right lower corner of any Reuters article): they switched the places (screenshot). I guess that Drudge Report did help a bit. ;-)
See also Klaus's U.N. speech about the ways (not) to solve the crises.
The Guardian's most popular article is dedicated to the same U.N. climate meeting and is called Obama the Impotent.
EurActiv, Times, and others inform that Poland and Estonia have won: the Court of First Instance ruled that the European Commission didn't have the right to cut the carbon quotas for these two countries because the countries themselves should set the numbers and the commission may only review them. :-)
Tuesday, September 22, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Israel: optimizing strike on Iran
David Petrla and some Pentagon sources cited in the media have convinced me that Israel is completing its plans to attack Iranian nuclear and military facilities. According to Dmitry Medvedev who is not a spokesman for Israel, Peres is telling the people that Israel has no such plans but Netanyahu clearly thinks different. ;-)
A typical Israeli soldier
Israel knows that Obamaland and many other Western or otherwise powerful countries suck as allies, that the mostly self-sufficient Iran doesn't really care about sanctions (especially not the homeopathic ones), and that the verbal attacks from Iran, combined with its accelerating nuclear efforts, represent a genuine existential threat for their very existence. Iran's freedom to manipulate with dangerous materials ends where the freedom - and life - of others begins. And I agree that they have already crossed the border.
Pictures from the anti-Obama rally in D.C.
This is not a full-fledged article. But Ross Hedvíček of Florida posted pretty cool pictures of the anti-Obama rally in Washington D.C. that took place a week ago or so.
Click the picture above to get to the article ("Comrade Obama has only been caressed in Czechia") to see many more photographs like that. About a million (more pix!) of witty people of all races, ages, and sexes attended the rally but only the protester above has won the Rally TRF Hottie award. Congratulations.
Climate in the U.N.
By the way, there was a climate meeting somewhere in the New York City today. Its purpose was for Prof Václav Klaus to teach his students, other politicians, something about the society, economics, politics, and their interactions with science, taking the global warming hoax as the main example.
But most of them are bad students so they were far too distracted by pornographic thoughts so they didn't learn almost anything. For instance, a little Nicolas has proposed one more intercourse with his friends in November.
The media are pretty much full of their pornographic thoughts.
The Guardian, a British socialist daily, decided that Obama can give a bad, awfully ho-hum, speech, too.
Yes, that's the speech. The ordering of the words is pretty much irrelevant so you don't have to watch the video with the hogwash. Reuters managed to publish some sensible information about the meeting in the article called
Reuters: U.N. climate meeting was propaganda...
The president said: "It was sad and it was frustrating. It's a propagandistic exercise where 13-year-old girls from some far-away country perform a pre-rehearsed poem. It's simply not dignified."
Oh, OK, I meant the Czech president. ;-)
On Thursday, at 8/7 Central, ABC is gonna broadcast FlashForward by Robert Sawyer.
The series will begin at the LHC in CERN. The point of the series is to discuss the fate and the destiny. Everything will be about a strange event.
For "1/alpha" seconds, where "alpha" is the fine-structure constant, every human being will be able to perceive the following 6 months of their lives. ;-)
Monday, September 21, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Kenya: rainmakers key to consensus on climate change
AFP reports that Kenya's Nganyi rainmakers are being enlisted to mitigate the effects of climate change:
Kenya rainmakers called to the rescue (click)
Alexander Okonda's great-grandfather was also a rainmaker. In the 1910s, he was arrested by the British because they determined that he had been responsible for poor rainfall.
Now, the great-grandson is getting the credit he deserves. As the methods of climatology have been strikingly transformed, he is appreciated as a top scientist. Alexander Okonda blows through a reed into a pot embedded in a tree hollow and containing a secret mixture of sacred water and herbs.
"This contains so much information. It is something I feel from my head right down to my toes," says Alexander, after completing his ritual. The young man is a member of the Nganyi community, a clan of traditional rainmakers that for centuries has made its living disseminating precious forecasts to local farmers.
Nothingness spreading in de Sitter space
Maulik Parikh (now Pune, India) posted the first hep-th preprint today, and I think it is the most interesting one:
Enhanced Instability of de Sitter Space in Einstein-Gauss-Bonnet Gravity (click)
He argues that the Gauss-Bonnet term - the topological Euler density (in 4D) - may look inconsequential perturbatively and it decides about the life and death of de Sitter backgrounds.
Recall that the Lagrangian of the Einstein-Gauss-Bonnet system is
L = 1 / (16.pi.G) [ R + alpha (R*R - 4 R.R + R^2) ].
Besides the Einstein-Hilbert term, you can see the topological term multiplied by the the area, "alpha". Because the pair-creation of black holes involves some topology change, the last term matters and increases the nucleation rate by the factor
Gamma = Gammaorig exp (4 pi alpha / G)
The second enhancing factor becomes huge if the Gauss-Bonnet area "alpha" is much bigger than the Planck area "G". That's expected to be the case even in perturbative string theory where "alpha" is comparable to the squared string scale, or at least Maulik says so. When the enhancement is large, you should care about the original decay rate,
Gammaorig = exp (-pi L2 / 3G)
where L is the curvature radius of the de Sitter space. Without the alpha-enhancement, this rate would be negligible for any de Sitter space that is visibly bigger than the Planck scale.
However, with the alpha-enhancement, the decay rate becomes significant. For an inflating Universe, the Hubble radius, "1/H", has to be greater than "sqrt(12 alpha)", otherwise the instanton creates lots of black holes which are probably unhealthy for the inflationary mechanism. In the example above, this means that the radius must exceed the string scale (with a particular numerical prefactor). This doesn't sound too dramatic a constraint but because the inflation scale is often close to the string scale, it could be a nontrivial constraint.
Of course, it would be even more interesting to discover that there is a new, unexpectedly huge contribution to the Gauss-Bonnet term that makes "alpha" close to the squared neutrino Compton wavelength. If this were the case, one could derive a constraint on the cosmological constant. ;-) Such a huge alpha is probably impossible but it would be fun if there were one.
There could exist similar enhancements and instabilities of this kind - and maybe its higher-dimensional counterparts - that could eliminate many kinds of compactifications with too small radii, too complicated topologies, and so on. Quantum cosmologists should try to study these possibly neglected mechanisms intensely.
By the way, this is related to one point that I dislike about the current approach of the anthropic people. For most features of the Universe, they can't find any strong and accurate enough anthropic constraint. But if they can "explain" something using this anthropic reasoning, they're satisfied. This is a fundamentally unscientific thinking because one should always try to find "all" conceivable constraints - and the "other solutions" (such as the black hole creation) could actually be more important, more stringent, more predictive, and more true than the ones that the anthropic people "guess" by chance.
ISS with NS5-branes
By the way, the second hep-th paper is also interesting and it is also about the vacuum selection. Kutasov, Lunin, MrOrist, Royston study the landscape of vacua obtained by stretched D4-branes (and other D-branes) between NS5-branes. They end up with some Intriligator-Seiberg-Shih-like SUSY breaking setup and argue that the early cosmology pushes the Universe towards a particular SUSY-breaking ground local minimum.
Sunday, September 20, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
The Age of Stupid
The filmmakers from the Horrifying Anthropogenic Global Warming Activist Socialist Hysteria (HAGWASH for short) are trying to create a new hit, The Age of Stupid.
The world is gonna burn and the mankind dies as soon as in 2055: see the realistic countdown before the final solution, extinction of life in 2055. An old guy, Pete Postlethwaite who is the last person alive ;-), looks to his media collections from 2008 or so and decides that everyone was stupid because he didn't save the world.
Check that all famous buildings are gonna be destroyed by a few tenths of a degree of warming.
But the people who are ready to consider this piece of dirty unscientific shrill propaganda as a serious documentary - which is how it's being marketed at many places of the world - are not just stupid. They deserve a far stronger term.
The wiser ones may consider to read the NIPCC (Non-governmental International Panel for Climate Change) report which is a truly comprehensible, nonsense-free, and comprehensive 880-page-long summary of the state-of-the-art research in climate science. Click the to initiate the purchase.
Hat tip: Alexander Ač
Saturday, September 19, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
The Da Vinci Code
I have finally watched The Da Vinci Code, based on the 2003 bestselling book by Dan Brown. And it was pretty impressive.
Spoilers follow.
If you don't know, in this novel, some mysterious murders turn out to be results of a big battle between two social or religious groups. One of them is supposed to protect the descendants of Jesus Christ and his wife, Mary Magdalene, who could prove that Jesus was a human being. The other one wants to protect the big dirty secret of the Christian Churches, namely Jesus's humanity.
Klaus: Is there a common European idea?
I am thankful for the invitation to these inspiring "Passau Dialogues". And I happily add that it is an honor to be given the opportunity to lead a discussion with such an important personality of contemporary Europe as - beyond any doubts - cardinal Schönborn surely is.
We will certainly discuss neither the details of the church orthodoxy - in which I wouldn't be an appropriate partner - nor the ever returning questions about the relationships between the state and the church. Also, I will avoid temptations to offer alternative hypotheses about the origin of the financial and economic crisis or similar topics of my discipline, the economic science.
Friday, September 18, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
China's top climatologist: 2 °C probably no problem
The Guardian informs about the opinions of the top climatologist of a group of 1.35 billion people that calls itself by a funny name, People's Republic of China.
Mr Xiao Ziniu says that it has not been determined whether the warming by 2 °C - which is often being talked to as the "cutoff" that is forbidden before 2050 (it won't happen, anyway!) - is dangerous. China has experienced warmer periods than today and each change of the temperature brings some advantages and some disadvantages.
TBBT & Sheldon Cooper: Xmas scene runs for Emmy
After having won the corresponding TCA award in August 2009, Jim Parsons (Dr Sheldon Cooper of The Big Bang Theory) has also been nominated for the "best actor in a comedy series" category of the Emmy awards. He's excellent, flawless, and - let me admit - in many ways better than the original. ;-)
This Christmas or Saturnalian scene (from 2x11, The Bath Item Gift Hypothesis) remains my most favorite one. It's just touching.
As an Emmy n00b, Parsons won't probably follow quite a straightforward path to his Emmy. And maybe he will. Kind of wisely, however, the scene above has been chosen as his bath item gift to the Emmy voters and as the trademark example of his unusual skills as an actor.
Thursday, September 17, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
ESA: Planck sends first images
If you remember, ESA launched Planck in May 2009. Four months later, we have the first images that should eventually (after six months) supersede the well-known WMAP images. BBC and others report.
Click to zoom in. The temperature variations measured by Planck in nine frequency ranges are depicted inside the strip, by the usual WMAP-like mottled colors.
Planck rotates roughly once a minute.
Czech, Polish missile defense system shelved
At 00:21 Prague local time at night, Barack Obama called Czech PM Mr Jan Fischer in the pajamas to his cell phone. ;-)
He told him that the American plans to build the radar in the Czech Brdy hills (and probably also the interceptors in Poland, although this second part remains unconfirmed) will soon be either scrapped or at least delayed to 2015 or later.
Obama also called Polish PM Donald Tusk but the latter didn't know how to use the telephone so there was no conversation.
Wall Street Journal, AP, The Telegraph, Google News (click)
It's officially argued that the Iranian missile abilities were just found to be less advanced than previously thought. Clearly, it's not the main reason.
L.M. with two Greenpeace protesters who lived on the trees right above the planned radar location (Google Maps) and who eat environmentally friendly roots, insect, excrements, and dirt.
Whether or not the system would be genuinely useful, it's clear that these canceled plans will diminish and cool the ties between the New Europe and the United States. On the other hand, Russia, the communist farmers who live around the radar site, and the Russia-funded protest NGOs may get less nervous.
CERN wants a linear collider
The LHC is not yet operating - it will begin in mid November, with reduced-energy collisions added a few weeks later - but the CERN director, Rolf-Dieter Heuer, already wants to build a new linear collider at CERN.
In his modest office with a socialist-style furniture, he also explains the difficult cleaning procedures and even more difficult preemptive policies. Heuer is optimistic about their control over the LHC which seems much smoother than LEP (the previous Lot of Extra Problems collider) even though LEP was simpler.
In a few years, the LHC will have years of experience of running at 14 TeV, he says, plus important discoveries, he hopes. Also, the European-American symmetry has been spontaneously broken and people suddenly come to CERN. ;-) Heuer thinks that science needs global, continental, as well as national projects to preserve the expertise of the people.
CERN has the capacity to host the International Linear Collider (ILC) or the 3-TeV, 48-km Compact Linear Collider (CLIC; and click the word haha): see the picture. But competition is always welcome, Heuer says - as long as the symmetry is broken and others have no chance. ;-)
Wednesday, September 16, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Kyoto II: Obama vs Eurocrats
An entertaining split between Europe and America has emerged concerning the question how the carbon emissions reductions should be achieved in individual nations.
Obama and Barroso in Prague, April 2009. Things may have been different then.
As The Telegraph, The Guardian, and everyone else reports, Europe and America differ in their opinion how the internal rules to reduce the CO2 production should be set.
The European politicians think that Kyoto I has been such an amazing success ;-) that it should be repeated and its successes should be amplified. Among other things, it means that all nations should adopt the same internal mechanisms to punish the CO2 emissions. The U.S. economy should be controlled by the Eurocrats in Brussels in the same way as any other decent EU country and Barack Obama should remain what he is appreciated for, namely a puppet of the global political correctness headquarters that should stay in Brussels.
On the other hand, Barack Obama himself dared to disagree. Kyoto I hasn't been a sufficiently huge disaster so the U.S. president wants to engineer an even better scheme. As the first post-Hoover protectionist president of a country that rejected Kyoto I and is going to reject Kyoto II as long as it is isomorphic (and gives a free pass to the poorer emerging markets), he thinks that every country should be allowed to decide about its own methods to achieve the targets and the carbon flows in America should remain uncontrollable by the EU and the U.N. That's quite a heresy for the EU, comrade Obama! ;-)
Even Steven Chu has warned that deep CO2 reductions cannot be achieved politically in the U.S. Why doesn't he follow the example of the tall and strong Napoleon in France who defeated 74% of the French citizens and imposed a carbon tax upon them? ;-) Sarkozy also wants to start a world trade war by a new CO2 border tax. Swedish EU presidency also urges the U.S. Senate to behave; if they won't, the U.S. Senators will be spanked just like any bad EU kids. ;-)
It's not hard to understand Europe's newly gained self-confidence with respect to America. The Made-In-America downturn has allowed Europe to surpass North America as the wealthiest region of the world. And the future fate of the U.S. dollar (now at 1.475 per euro, or 17 crowns per dollar) - whose reserve status is being questioned by all members of BRIC as well as others (everyone can see that the U.S. may suffer from the same kind of an irresponsible socialist government as everyone else) - may turn out to have something to do with this picture.
The declared purpose of the December 2009 negotiations in Copenhagen that will hopefully fail completely is to save the Earth if not the multiverse. The UAH AMSU data see the average annual and global brightness temperature of the Earth to be close to minus 15.5 °C. Ban Ki-Moon and similar stellar scientists have calculated that if the temperature exceeds f***ing frying minus 13.5 °C which is by 2 °C higher, all of us are going to evaporate or transform into plasma and the Universe may decay into a different state, too. And I don't have to explain you the staggering statistical implications for the whole multiverse. ;-)
During the year, the brightness temperature oscillates approximately between -17 °C in January and -14 °C in July - because the variations of the landmass, which is mostly on the Northern Hemisphere, are more pronounced than the variations of the oceanic temperatures. The recent, 30-year trends indicate that the temperature is increasing roughly by 1 °C per century, so the catastrophic level when the temperature will oscillate between -15 °C and -12 °C could occur around the year 2200 or so - whether or not we will continue to use fossil fuels.
If you have ever experienced how much brutally hotter -12 °C is relatively to -14 °C, you must agree with all these guys that we're all doomed already next year - because we can already predict that the year 2200 will come - unless Obama and his compatriots will join the EU as obedient members. :-)
Myths about the minimal length
Many people interested in physics keep on believing all kinds of evidently incorrect mystifications related to the notion of a "minimal length" and its logical relationships with the Lorentz invariance. Let's look at them.
Myth: The breakdown of the usual geometric intuition near the Planck scale - sometimes nicknamed the "minimum length" - implies that the length, area, and other geometric observables have to possess a discrete spectrum.
Reality: This implication is incorrect. String theory is a clear counterexample: distances shorter than the Planck scale (and, perturbatively, even the string scale) cannot be probed because there exist no probes that could distinguish them. Consequently, the scattering amplitudes become very soft near the Planck scale and the divergences disappear.
Blog2Print: print blogs as books
There are many reasons why people may prefer the good old paper over the internet pages, especially when it comes to long essays.
Click to zoom in.
Tuesday, September 15, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Smartkit: On the edge game
Click the screenshot for the game. Jump on each white square once before you end up with the red square.
Monday, September 14, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Murray Gell-Mann: 80th birthday and interview
On Tuesday, Murray Gell-Mann celebrates his 80th birthday. Big congratulations!
This article will summarize some old achievements of the great physicist but also discuss some of his recent opinions about string theory.
Murray Gell-Mann was born on September 15th, 1929 in Lower East Side of New York to a family of Western Ukrainian Jewish immigrants. When he was fifteen, he joined Yale. ;-) See some pictures from his early life.
In the 1950s, when he was in his 20s, he studied cosmic rays and discovered/invented the strangeness in order to make sense out of the isospin, other quantum numbers, and their relationships (e.g. using the key Gell-Mann-Nishijima formula).
I wrote his biography one year ago, in Oskar Klein and Murray Gell-Mann: birthdays. So I won't write everything again. Let me just say that Murray Gell-Mann was the most important one among the first pioneers who realized that there were quarks inside hadrons which is what earned him the 1969 physics Nobel prize. Note that all these things, including the award, had been completed years before the discovery of QCD.
Clifford Johnson: LASER
A pretty good, non-technical explanation how LASERs work. Well, the reason why the photons end up going in the same direction is slightly underexplained but the very idea of a particle physics choreography is neat.
Via Asymptotia.
Global warming affects beer, eggs, corn, pork
Rafa has pointed out that Nude Socialist as well as lots of other media have reported that global warming makes beer suck: some Czech researchers think that the concentration of (bitter) alpha acids in hops was recently dropping by a whopping 0.06 percent per year (...) which they attribute to global warming (...).
That's a true catastrophe (...) which finally proves that we are all doomed. Click the sentence below to read more.
Saturday, September 12, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Schrödinger's virus and decoherence
The physics arXiv blog, Nature, Ethiopia, Softpedia, and many people on the Facebook were thrilled by a new preprint about the preparation of Schrödinger's virus, a small version of Schrödinger's cat.
The preprint is called
Towards quantum superposition of living organisms (click)
and it was written by Oriol Romero-Isart, Mathieu L. Juan, Romain Quidant, and J. Ignacio Cirac. They wrote down some basic stuff about the theory and a pretty clear recipe how to cool down the virus and how to manipulate with it (imagine a discussion of the usual "atomic physics" devices with microcavities, lasers, ground states, and excited states of a virus, and a purely technical selection of the most appropriate virus species).
It is easy to understand the excitement of many people. The picture is pretty and the idea is captivating. People often think that the living objects should be different than the "dull" objects studied by physics. People often think that living objects - and viruses may or may not be included in this category - shouldn't ever be described by superpositions of well-known "privileged" wave functions. Except that they can be and it is sometimes necessary. Quantum mechanics can be baffling but it's true.
Friday, September 11, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
CO2 makes Earth greenest in decades
In June 2009, Anthony Watts reposted an article by Lawrence Solomon that pointed out that the Earth is greener than it has been in decades if not centuries.
See also NASA's animations of this Earth (the map of its bio-product), for example the low-resolution one.
In less than 20 years, the "gross primary production" (GPP) quantifying the daily output of the biosphere jumped more than 6%. About 25% of the landmass saw significant increases while only 7% showed significant declines.
Note that the CO2 concentration grows by 1.8 ppm a year, which is about 0.5% a year. It adds up to approximately 10% per 20 years. In other words, the relative increase of the GPP is more than one half of the relative increase of the CO2 concentration. The plants also need solar radiation and other things that haven't increased (or at least not that much) which is why the previous sentence says "one half" and not "the same as".
Because the CO2 concentration in 2100 (around 560 ppm) may be expected to be 50% higher than today (around 385 ppm), it is therefore reasonable to expect that the GPP will be more than 25% higher than it is today. Even by a simple proportionality law, assuming no improvements in the quality, transportation, and efficiency for a whole century, the GPP in 2100 should be able to feed 1.25 * 6.8 = 8.5 billion people, besides other animals.
Of course, in reality, there will be lots of other improvements, so I find it obvious that the Earth will be able to support at least 20 billion people in 2100 if needed. On the other hand, I think that the population will be much smaller than 20 billion, and perhaps closer to those 8.5 billion mentioned previously.
Back to the present: oxygen
Now, in September 2009, Anthony Watts mentions a related piece of work that some Danish researchers just published in Nature:
Copenhagen press release
Paper in Nature
The authors have studied chromium (not chrome!) isotopes in iron-rich stones to determine some details about the oxidification of the oceans and the atmosphere that occurred 2+ billion years ago.
In two different contexts, they are forced to conclude that an increased concentration of oxygen in the oceans and the atmosphere led to cooling.
The authors say a couple of things about the ice ages that are manifestly incorrect. They say that the oxygen concentration could have been the key driver behind the temperature swings during the glaciation cycles: a higher amount of oxygen allowed the organisms to consume more CO2 and other greenhouse gases that reduced the temperature by a weaker greenhouse effect.
That's clearly incompatible with the fact that the temperature was changing roughly 800 years before the concentration of the greenhouse gases. The temperature variations couldn't have been an effect caused by the greenhouse gases, not even if you try to add oxygen in the sequence of all the correlated phenomena.
However, it's plausible that the oxygen levels influenced the temperature more directly (which consequently influenced the concentrations of trace gases, via outgassing).
A simple additional comment I can make is that the higher concentrations of oxygen may be increasing the albedo (reflectivity) of the oceans and the landmass by adding life forms which may be optically brighter than the dead soil and oceans and/or the life forms that don't need oxygen (or because of another inequality in the energy balance of photosynthesis and/or breathing).
Even if that is the case, it remains largely unknown whether the oxygen variations in the glaciation periods were sufficient to drive the temperatures (I guess that they're not) and even if they were sufficient, it would remain to be seen what was their cause.
Thursday, September 10, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Abiogenic birth of oil
At least a large portion of petroleum is believed to originate from biological processes. However, an article in Nature,
Kolesnikov, Kutcherov, Goncharov: Methane-derived hydrocarbons produced under upper-mantle conditions
uses spectroscopic methods applied to laser-heated diamond to argue that at temperatures around 750-1250 °C and pressures around 20,000 atmospheres, methane transforms into ethane or propane or butane, combined with graphite and hydrogen. Under the same conditions, ethane decomposes into methane: the transition is reversible.
It should also mean that it is easier to find oil, as The Swedish Royal Institute of Technology puts it.
New oil reserves
Such a statement is not too shocking: two days ago, 1-2 billion new barrels of light oil were announced by BG in Brazil, increasing the world's proven reserves by 0.1-0.2%. One week ago, BP found 4-6 billion new barrels in the Gulf of Mexico, previously thought to be "finished".
Review of the membrane minirevolution and other hep-th papers
Today, there are twelve new papers primarily labeled as hep-th papers. The first one, and one that may attract the highest number of readers, is a review of the membrane minirevolution by Klebanov and Torri. However, I will mention the remaining eleven preprints, too.
Membrane uprising: a review
The membrane minirevolution was discussed on this blog as a minirevolution long before most people noticed that there was a minirevolution going on.
Important papers by Bagger + Lambert and by Gustavsson (BLG) introduced a new, unusual Chern-Simons-like theory with 16 supercharges in 2+1 dimensions. It was argued that it had to describe two coincident M2-branes. It used to be thought that the CFT theories dual to M-theory on "AdS4 x S7/G" had no Lagrangian description except that BLG found one.
Upgraded: Hubble Space Telescope
Carina Nebula in the visible (top) and infrared (bottom) perspective. That's where stars are being born.
The Hubble Space Telescope is alive, well, and upgraded. Click the picture above to see 7 pretty new pictures (via BBC) or see Google News or Blog Search.
The book advertised on the left side is just one among many other books with pretty colorful photographs that the Hubble Space Telescope has produced during those years. Let me recall that the gadget should eventually be replaced by the James Webb Telescope.
Wednesday, September 09, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
ASU: Origins of the Universe
On April 6th, 2009, six Nobel prize winners discussed the origins of the Universe in Arizona. If you have 64 extra minutes, and/or if you liked a similar ASU discussion whether our Universe was unique, here I bring you a new one.
Baruch Blumberg got a medicine Nobel prize for a virus and he is an astrobiologist. Sheldon Glashow, David Gross, and Frank Wilczek are particle physicists who need no introduction. Wally Gilbert is a biochemist, Chemistry Nobel prize winner in 1980, founder of Biogen etc., capitalist, chairman of the Harvard Society of Fellows, and a photographic artist.
Technical: Click the mail logo below to initiate the process to subscribe to daily e-mail updates with my texts on this blog which are sent every day at 5:15 am Prague Time.
Frank Wilczek and Sheldon Glashow have a small fight about supersymmetry around 26:00. Wilczek explained that "axions" were named after a detergent whose name Wilczek liked so much that he waited for an opportunity to name a particle after it. Glashow reveals that WIMP stands for "Women in Maths and Physics at Harvard" which may be an actual secret organization. :-)
9:09:09 09/09/09
This is not a real posting. Instead, it is just a placeholder posted on 09/09/09 at 09:09:09. Sorry for that! The comment thread can be used for any discussions. ;-)
By the way, the numbers could lead you to ask whether 0.9999... is equal to 1.0000...
Well, you may define your numbers in any way you want. But if want these particular, possibly infinite sequences of decimal digits to represent a number system (namely the set of real numbers) that satisfies (x/3)*3=1, then you're forced to accept that 0.9999... must be identified with 1.0000.... simply because 1/3=0.3333... and 0.3333...*3 = 0.9999... ;-)
Tuesday, September 08, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Hideki Yukawa: an anniversary
Today, several mathematicians and physicists would celebrate their birthday or deathday. (Some cosmologists are still confused why people don't celebrate their deathdays too often: such an asymmetry shamefully breaks the politically correct equivalence between the different arrows of time! Well, it indeed does: the breaking comes from the so-called "logical arrow of time".)
Marin Mersenne was born in 1588, Joseph Liouville died in 1882, Hermann von Helmholtz died in 1894. But let us look at this guy.
Hideki Yukawa was born in Tokyo on January 23th, 1907 and died in Kyoto on September 8th, 1981. Just like the death is the time reversal of the birth, Kyo-To is the time reversal of To-Kyo, so it makes sense in this case.
When he was 26, he was hired as an assistant professor in Osaka which was a great choice because two years later, in 1935, he published his theory of mesons. The pion was observed in 1947 and Yukawa received his Nobel prize in 1949: that was the first Japanese Nobel prize. He also predicted the K-capture, i.e. the absorption of a low-lying, "n=0" electron by the nucleus of a complicated atom.
Sunday, September 06, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Schellnhuber: West has exceeded quotas
In his previous life, Hans Joachim Schellnhuber used to be a fairly good theoretical physicist. For example, he would solve the Schrödinger equation with an almost periodic potential in 1983. He has spent a year or so as a postdoc at KITP in Santa Barbara (1981-82).
But the times have changed. For a couple of years, he has been the director of the Potsdam Institute for Climate Impact Research and the main German government's climate protection adviser. What he has just said for Spiegel, in
Industrialized nations are facing CO2 insolvency (click),
is just breathtaking and it helps me to understand how crazy political movements such as the Nazis or communists could have so easily taken over a nation that is as sensible as Germany. A few rotten steps in the hierarchy is enough for a loon to get to the very top. He is proposing the creation of a CO2 budget for every person on the planet, regardless whether they live in Berlin or Beijing. Let us allow him to speak:
Saturday, September 05, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Mojib Latif warns IPCC of cooling
Nude Socialist informs that Mojib Latif, a member of the IPCC, has warned his fellow IPCC members that we could see 10-20 years of cooling that will make people question the global warming orthodoxy.
Highly trustworthy sources of mine describe Latif as one of the "better ocean modelers". He used to say that the models were perfect but when someone told him that perfect models meant that no extra funding for modelers was necessary, he "developed a deeper appreciation for the model shortcomings." ;-)
So he appreciates that the ocean cycles and others may drive the climate in a different direction than the greenhouse effect for a decade or two. "Short-term" predictions are unreliable, he admits. But it took me quite some time to understand the atmosphere of expectations among those people.
At the beginning, I thought that Latif was just another quasi-religious guy who says that people should be afraid of global warming regardless of the observations and their consistency with the models. Later, I realized that I was probably right but I also realized that Latif was a sort of hero at the same moment.
It is actually a heresy among the IPCC members to even think about the possibility that 10-20 years in the future won't see any discernible global warming - despite the fact that this is precisely what has happened in the previous 10 years (and even 15 years, when you insist on statistical significance).
Friday, September 04, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Magnetic monopoles seen in CM physics
Science Magazine has published a paper by 14 British and German authors,
Dirac strings and magnetic monopoles in spin ice Dy2Ti2O7 (click),
who claim to have seen, via diffuse neutron scattering, emergent magnetic monopoles in a spin ice on the highly frustrated pyrochlore lattice.
These magnetic monopoles appear at the ends of "observable Dirac strings". This is way too bizarre a terminology, to say the least, because a basic defining property of the Dirac strings, as realized by Paul Dirac, is that they must be unobservable! ;-) OK, fine, they mean some magnetic flux tubes that actually don't respect the Dirac flux quantization rule.
See also
Nature (popular),
Physics World, PhysOrg, Science Daily (click).
Let me say a few words about the Dirac strings.
If you imagine a magnetic monopole of charge Q, i.e. an isolated North (or South) pole of a magnet (that is normally coming in the dipole form - with both poles - only), the magnetic field around is radial and it goes like "Q / R^2". Remember the letter "Q". The vector function "(X,Y,Z)/R^3" in three dimensions has the feature that its divergence equals zero. Well, not quite: it is a multiple of a delta-function.
Is our Universe unique, and how can we find out?
If you have spare 45 minutes, here's a fun panel discussion from April 3rd, 2009, taken during the Origins Symposium at Arizona State University. If you click the O.S. link, you may find other panels with Brian Greene, Lawrence Krauss, Steve Pinker, and many others.
Thursday, September 03, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Japanese voters may have commited economic harakiri
If you haven't noticed, Taro Aso of the center-right LDP (Liberal Democratic Party of Japan), the most recent prime minister of Japan, was politically killed by the recent polls.
He was a kind of character - and an openly pro-market, pro-separation-of-classes guy. That was too much of a good thing for the low-profile, emotionally conservative electorate in the world's #2 economy which is only the #13 source of the TRF visitors. They ended the 50 years of the government led by LDP.
Kimi ga Yo, or "May Your Reign Last Forever", turned out to be too optimistic anthem lyrics for LDP of Japan.
What is it going to mean for Japan? The winner is the left-wing DPJ (Democratic Party of Japan). Yukio Hatoyama is nicknamed "The ET" or "The Alien" because he looks like one. Moreover, his wife had a trip aboard a UFO space shuttle to Venus, a beautiful planet governed by the little green party (which allowed a 400 °C of CO2 greenhouse effect: the little green comrades picked Venus as the destination because they apparently don't have good mental asylums in Japan).
Yukio Hatoyama comes from the "Japanese Kennedy family" and is going to become the next prime minister of Japan in two weeks.
Their program includes a schedule to screw the Japanese relationships with the United States and a sophisticated strategy to harass the Japanese corporations. The ET has been attacking the existing Japanese market economy which he calls the "unrestrained market fundamentalism and financial capitalism that are void of morals" for quite some time.
He also wants to "put the interests of people before those of corporate Japan", a formulation that will be familiar to those who remember the communist coup d'états in the former socialist Europe, apparently not noticing that the whole Japanese post-war miracle was about the freedom of the corporations to stay ahead of the average citizen and to drag him or her to the future, which has always been in his or her best interest.
Will the Japanese workers be motivated enough by their corporations to work hard enough to afford Beethoven's fifth breakfast? And will Honda's Rube Goldberg machine satisfy the CO2 limits described in the next paragraph?
Moreover, he wants to reduce Japan's CO2 production by 25% by 2020 which approximately translates into a minus 2% annual GDP growth rate for every year in the following decade. It shouldn't shock you that the Japanese companies are concerned, to put it very mildly.
We will see whether the ET is able to transform Japan ($34,000 GDP per capita) into another Vietnam ($2,100 GDP per capita) or North Korea ($1,700 GDP per capita), much like many of his soulmates have repeatedly done in other countries. The Vietnamese people are doing fine in the Czech Republic, we kind of like them, and we're obviously ready to absorb thousands of Japanese emigrants, too. ;-)
Wednesday, September 02, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Trillions to be wasted for CO2 madness a year
The most experienced readers of The Reference Frame remember a Kyoto counter that used to be embedded in the sidebar. It was created by Steve Milloy of and it was counting the dollars wasted for the Kyoto protocol, assuming that the annual cost of the carbon regulation was USD 150 billion.
He was - and I was - criticized from all sides of the alarmist movement. Let us omit the most vitriolic stuff and look in the comment section of DeltoidEli Rabett (whose real identity is known to us) wrote in 2005:
The world economy is about 20 trillion per year. So even at the junk science's rather exaggerated Kyoto cost of 150 billion per year that is 0.75 percent. Well within the noise using an upper limit for the cost and a lower limit (if any) for the benefit. That, my friends is a good deal. We should grab it.
Well, the figures of these people have always been strange, even when it comes to numbers that every person with a basic interest in the world's economy should know. The world's GDP was USD 55.5 trillion in 2005, not USD 20 trillion. More importantly, the costs - USD 150 billion a year - were surely not "exaggerated".
Washington Post hails Obama as a climate skeptic
Marc Morano has pointed out an interesting article in the Washington Post
Obama Needs to Give a Climate Speech - ASAP
in which Marc Morano and Barack Obama are credited with the gradual fall of the climate hysteria or, if you want to use the original wording, with the "growing defection of experts from the scientific consensus view". ;-) You might think: What a strange pair of bedfellows. But is it really so strange?
Of course, the author, Andrew Freedman, thinks that Barack Obama is obliged to give a fiery alarmist speech to please the movement of the little green men like Freedman himself. Well, I am not 100% sure whether Freedman is the U.S. Überpresident who can control the U.S. President. ;-)
After their private conversations, President Klaus was pleasantly surprised by Obama's charm and energy. Climate realist Klaus noted that Obama has complained about his aids' and his environment's having no sense of economic reality when it comes to policies focusing on CO2. It sounded like the music from the heaven to Klaus's ears, he said.
I think that Freedman is right. Barack Obama has given a smaller space to the climate change in his speeches than George W. Bush did in the same stage of his presidency because Barack Obama is actually a climate crypto-realist. He is just surrounded by hordes of wrong, fearmongering people - and he has become a symbol of all their wrong plans. But at the very depth of his soul, he doesn't think that it's a good idea to regulate carbon. Am I wrong?
Tuesday, September 01, 2009 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
An unexpected constitutional crisis in Czechia
I would bet that the situation will be clarified pretty soon but the news from the Constitutional Court of the Czech Republic whose headquarters are located in the town of Brno, Moravia sounds pretty shocking.
All the big and not-so-big parties have begun the campaign for the early elections on October 9th-10th, 2009. Except that the Constitutional Court has just decided that early elections and all the laws that allow them - and that shorten the mandate of the current Parliament - are unconstitutional, despite the fact that the bill about the early elections has been adopted as a constitutional bill.
What happened?
Mr Miloš Melčák was elected as a deputy for the social democratic party in 2006 except that much like a dozen of similar deputies in recent years, he has "betrayed" the bulk of his party by allowing the center-right government to exist. Obviously, he was kicked out of the social democratic party. The "traitors" are being punished in a straightforward way: the parties won't include them on their list so they will lose their job and feeding troughs right after the following elections.
Of course, Mr Miloš Melčák decided that any new elections that would remove him from the Parliament are bad, so they must be unconstitutional. He sent a complaint to the Constitutional Court. In a stunning development, the court has ruled that Mr Melčák is right today. Congratulations. :-)
We are learning that according to the basic charter of the human rights, Mr Melčák and others who are at risk enjoy the right for an "uninterrupted execution of a public appointment". They can't be removed by anyone, the court claims! ;-) The communist party has used a similarly "uninterrupted" definition of democracy for four decades.
The court believes that the early elections would be an example of an "unacceptable change of the critical attributes of a democratic rule of law" - wow - and it's such an important stuff for the court that the court - except for two "dissenters" - thinks that the early elections can't take place before the court publishes its final verdict about the complaint! ;-) So the elections have been postponed indefinitely.
Now, this is obviously a strong stuff.
On one hand, it's good that the constitutional court is trying to verify things, including the decisions that no one in the Parliament dares to doubt. On the other hand, it's kind of crazy that it considers the early elections a "brutal violation of the basic attributes of democracy" and that it claims to have the right to judge which constitutional bill is more important than the other ones.
Even if there were an inconsistency between the basic charter of human rights and freedoms on one side and the bill that declares the early elections, both of them are constitutional bills and the constitutional court would have to operate within this possibly perceived inconsistency.
I think that it's clear that the Parliament has the "moral" right to dissolve itself, via the expected steps involving the President, and the early elections are the obvious democratic solution (or an attempt for a solution) of the otherwise "unsolvable" situation. The interpretation of the "uninterrupted execution of a public appointment" is bizarre, speculative, reminiscent of the undemocratic regimes, and secondary. But the court is making this strangely interpreted right more important than the right of the citizens - and the bulk of their representatives - to democratically choose a new Parliament which is clearly more important according to basic common-sense understanding of democracy.
It's not clear how they will solve it. The court may try to delay the elections indefinitely - or not. Clearly, the lawmakers should search for a very speedy way to reshuffle the laws so that the complaint will be mute. I am no lawyer but I guess it must be possible to revoke all the laws that were claimed to lead to inconsistencies, cancel or update some paragraphs in the charter that lead to similar inconsistencies, and accept a new bill about the early elections that will be consistent but effectively equivalent to the current one.
Also, I think that the constitution is imperfectly designed if it doesn't allow early elections as a standard procedure. At any rate, the early elections have been considered legitimate for quite some time - and even without a canonical wording in the constitutional "core", we've had some early elections in the past - so the sudden realization that they're unconstitutional is strange.
World War II began 70 years ago
It's been 70 years since Poland was invaded by Germany which ignited the most brutal global conflict that the world has seen as of 2009.
One day earlier, on August 31st, Germany staged an attack of would-be Polish troops against a radio station in Gleiwitz, in order to create a "justification" for the attack against Poland.
Poland with its underdeveloped and relatively weak army had no real chance to win. It was surrounded by bastards on the West and on the East. The Ribbentrop-Molotov Pact (which Putin considers immoral) guaranteed that the Soviet Union would not protect Poland. In fact, it occupied the Baltic states and picked a piece of Poland, too. |
3a68a6359974cb0d | Take the 2-minute tour ×
I read about Fractional Quantum Mechanics and it seemed interesting. But are there any justifications for this concept, such as some connection to reality, or other physical motivations, apart from the pure mathematical insight?
If there are none, why did anyone even bother to invent it?
share|improve this question
Interesting question. Looking here it seems like you can derive generalizations of the usual formulas which look the same but with the Levy parameter in them (like the "fractional Bohr atom"), you get the usual answer by putting the parameter to 2. But, as you say....why? – twistor59 Nov 26 '12 at 12:37
Related: physics.stackexchange.com/q/4005/2451 and links therein. – Qmechanic Nov 27 '12 at 0:12
@Qmechanic interesting. Thanks for the link – namehere Nov 28 '12 at 16:33
1 Answer 1
It seems the goal here is to be able to explain all kind of phenomena considering complex situations, in which nonlinearity could be infeasible to handle as it happens in non-quantic systems. According to this reference, the fractional Schrödinger equation
$$i\hbar\dfrac{\partial\Psi(\vec{x},t)}{\partial t}=-[D_{\alpha}(\hbar\nabla)^{\alpha}+V(\vec{x},t)]\Psi(\vec{x},t)$$
where $(\hbar\nabla)^{\alpha}$ is the quantum Riesz fractional derivative
$$(-\hbar ^2\Delta )^{\alpha /2}\Psi (\vec{x},t)=\frac 1{(2\pi \hbar )^3}\int d^3pe^{i\frac{\vec{p}\cdot\vec{x}}\hbar }|\mathbf{p}|^\alpha \varphi ( \vec{p},t)$$
still corresponds/represents quantic systems. For instance, Laskin shows that uncertainty (fractal) it does exist, because
$$\langle|\Delta x|^\mu\rangle^{1/{\mu}}\cdot\langle|\Delta p|^\mu\rangle^{1/{\mu}}>\dfrac{\hbar}{(2\alpha)^{1/{\mu}}}$$
for $\mu<\alpha$ and $1<\alpha\leq 2$.
share|improve this answer
quantic? You do mean quantum, right? I can see this equations, but I find no relating of physics with these mathematics up there. I found Qmechanic's link more enlightening. Check it out, its not bad. – namehere Dec 7 '12 at 11:35
Your Answer
|
58e99af11a3d32d0 | Physics Is For Eternal Five-Year-Olds
Yesterday’s post about differences between intro physics and chemistry sparked an interesting discussion in comments that I didn’t have time to participate in. Sigh. Anyway, a question that came up in there was why we have physicists teach intro physics courses that are primarily designed to serve other departments.
It’s a good question, and in my more cynical moments, I sort of suspect it’s because engineering faculty are canny enough to outsource the weeding-out of the students who can’t hack it in engineering. But I think there are good reasons, particularly at a liberal arts school like Union, to have intro physics taught by physicists, because there’s a fundamental difference between the way physicists approach things and the way other scientists and engineers approach them. As I said not that long ago, physics is about rules, not facts. The goal of physics is to come up with the simplest possible universal rules to describe the behavior of the universe, through breaking everything down to the most basic case imaginable. Once you’ve got those rules, then you build back up to more complex cases.
Completely coincidentally, I had a meeting yesterday with one of my advisees, who has recently been taking a couple of chemistry courses to fulfill our graduation requirement that students take two science courses outside the physics major. I asked him how that was going, and he said he’s found it intensely frustrating, because their treatment of the systems they’re studying stops at a higher level of abstraction than he would like. “I keep asking ‘Yes, but why does that happen?’ I think I’m annoying the other students and the professors.”
This neatly mirrors a story a colleague told some years back about how he wound up in physics after going to college planning to major in chemistry. He said he kept asking the same sort of questions– “Yes, but why do p shells contain six electrons?” Eventually, he got to the class that was supposed to explain everything, which started by just writing the Schrödinger equation on the blackboard, and declaring that its solutions provided the explanation for everything. At which point he asked “Yes, but where the hell does that equation come from?” and ended up taking physics in an effort to find out. Ironically, most physics classes don’t do much to explain where the Schrödinger equation comes from, either– I remember my undergrad quantum prof spending some time making analogies to wave and diffusion equations, but nothing approaching a derivation. He found the general approach much more congenial, though, and switched career paths as a result.
I’m not relating these simply to take Rutherfordian shots at the stamp-collecting sciences, but because I think there really is a difference in the approach, and that approach can be valuable to see. Physics is more fundamentally reductionist than most other sciences, and much more likely to abstract away inconvenient details in search of universal rules. This can make physics as frustrating for students from other fields as chemistry is for some physics students– I got a student evaluation comment a few years back complaining that we “approximated away all the interesting stuff, like friction and air resistance.” Which struck me as funny, because in my world, friction and air resistance aren’t interesting– they’re mathematically messy and inelegant, and obscure the universal rules that are the real point of physics. But for somebody who wants to be the right sort of engineer, those are the interesting points.
Some time ago, I read something about science communication where a researcher mentioned the “five-year-old game.” The claim was that most people, even Ph.D. scientists can easily be reduced to sputtering incoherence if you ask them to explain something and then keep asking “Why?” like a five-year-old. In some sense, then, physics students like my frustrated advisee or my ex-chemist colleague are eternally five years old, always asking “Why does that happen?” to the increasing annoyance of people who are more inclined to stop with a slightly higher level of approximation in order to accomplish some particular useful task.
I think it’s useful, in a cultural sense if nothing else, for students to see both sides of that process. The engineers and chemists ought to understand the physics approach of getting to the basic, universal rules that underlie everything, and the physicists ought to get a little experience working with non-ideal cases. And the ones who find even physics too messy and approximate can go on to become math majors…
1. #1 andre
February 15, 2013
“why we have physicists teach intro physics courses that are primarily designed to serve other departments”
Chemists feel your pain. In undergrad at a small liberal arts school, the organic classes were routinely distilled down from 80+ students to between 2 and 10 chemistry majors. This isn’t even counting the numbers in the first year courses (it’s typical for most students drop premed after taking intro chem, not organic).
The main reason the chemistry department exists at small schools is so that the biology department doesn’t have to see hopeful premeds cry. (And yes, the point of the physics department is it’s a place to send students who keep asking us chemists “Why?”)
2. #2 Alex
February 15, 2013
Strangely enough, theorists are often the ones who get the most sputtering and incoherent with “why” questions. I’m a theorist of the scaling arguments and simulations type. I know that postulates are postulates. When I pose enough “why?” questions to some of my more abstract colleagues, they eventually get down to “Well, it’s a consequence of [some symmetry].” And then you ask why that is and they say something very mathematical. And then you ask why the universe follows those rules and they think you’re dumb and don’t get the math. And then you eventually drill into it enough and they eventually admit they don’t know.
The other fun game I play with them is to take classical physics on its own terms, and try to understand the logical foundations of something without resorting to quantum (or, in some cases, relativistic) explanations. They always want to invoke that, and my reply is always that classical physics makes the predictions that it makes, and so those predictions should be understandable as consequences of some idea or assumption embedded in classical physics. This often makes them sputter…until they concede that I’ve found something cute.
3. #3 Eric Lund
February 15, 2013
@andre: It isn’t always chemistry professors who have to weed out the pre-meds. Pre-meds are also required to take a year of physics (though not necessarily the same course that engineers have to take). When this sequence is distinct from the traditional intro physics sequence, it is even more hated and feared among physics faculty than the traditional sequence. I had the dubious pleasure of TAing such a course as a grad student, and if I ever am involved with such a course in the future, it will be too soon. One of my students in that lab called his physics lab disk (floppy, at the time) “I Hate Physics”. You are never going to reach a student like that.
Part of my objection to pre-med physics is that you are expected to teach physics without recourse to calculus, and the people who end up in such a course are likely to be afraid of algebra, too. It’s a lot harder to explain physics clearly without calculus. Doing so without algebra becomes Mission Impossible. I’m also of the opinion that a person who cannot handle algebra/geometry should not be considered well-educated, but that’s a different rant.
4. #4 Peter Morgan
February 15, 2013
I react against some of this. Perhaps it’s that “Why?” is a technique first applied by 3-year-olds. “Why?” questions can be constructive, but they too often seem cheap shots. I suppose that a TV interviewer who asked nothing more than “why?” would lose our interest. Although one is always trying to step outside formulas, I currently prefer questions like “has anyone done it differently?”, “does it have to be done that way?”, “what would we have to change so we could do it differently?”, “can the idea be presented in a different way that might be more useful?”. The literature will provide answers like “this, this, and this textbook, review article, and research paper”, our own research provides others.
5. #5 tcmJOE
February 15, 2013
But having posed the question, some responses:
– Your engineering PhD is a very distinct creature than your car-engine designing engineer. I doubt that the course would lose all of its high-level-mindedness.
–And I’d be very interested to see the sort of abstraction your theoretical engineer would give. Probably a lot of “flow” of energy and momentum.
– And even if it does (somewhat), isn’t Hamiltonian/Lagrangian/Griffith’s E&M the perfect place to really start hammering the very abstract mindset?
– I’d also argue that a bit more of an engineering perspective would do physicists good. It would be pretty vital for anyone going into experimental work and one could really talk about successive approximations.
Though honestly, the biggest thing I think we’re missing in first year physicists is some basic numerical work, and a discussion of what a computer does and does not tell us. I’m thinking along the lines of doing some really basic ODE stuff with Mathematica. Physics, engineering, whatever: you’re going to have to be facile with a computer, and the sooner you start learning that the better off you’ll be.
6. #6 Mauro
February 15, 2013
As a physicist I am… you made me a very big compliment and gave me a very big pleasure with this article.
7. #7 Schlupp
February 15, 2013
I thought, it was three. But admittedly, that may be theorists; experimentalist may need more advanced motor skills.
8. #8 andre
February 15, 2013
@Eric Lund: Oh we chemists know the physicists help with the premeds. You also help weed out the chemists who lack any math skill. We thank all of you.
Chad: I’d be interested in knowing what sort of “why?” questions your student had that weren’t addressed in a chemistry course. Normally, these boil down to questions someone would know if they studied physics (that’s the wall I butt up against most). I feel most courses in modern chemistry are ones that a physicist could enjoy (unless you’re talking biochem or someone teaching organic like it was the 1980s, in which case, I understand the frustration).
9. #9 Rhett
February 15, 2013
When I get to the “but why?” game, I quickly resort to – well, that is what we have that agrees with experimental evidence. Boom. Done.
Physics isn’t about the truth, physics is about models. If the model agrees with evidence then we use it.
I remember the first time this came up. A student wanted to know how to derive Coulomb’s law. I ended up showing the student our experimental set up for measuring the coulomb force. It was fun.
10. #10 AliceB
February 15, 2013
It reminds me of the classic xkcd comic titled Purity:
11. #11 Bee
February 16, 2013
Why questions are easy to ask but often not very fruitful. One of the most important things to learn is to ask good questions. Be that as it may, I ended up in physics in a very similar way. I recall finding physics in high school terribly frustrating because they dumped on us a big pile of loosely related equations, not all of which evidently were necessary, and it didn’t make any sense to me. Frustratingly, when I asked my teachers, they would just go in circles and explain one “law” with some other “law”. I ended up trying to read textbooks that were at this point pretty much incomprehensible because all the math wasn’t introduced. (The situation dramatically improved with a boyfriend who was a master’s student in math ;o)). You can easily see how I ended up where I am with that initial condition…
12. #12 Bee
February 16, 2013
PS: To follow up on Alice’s comment, I am reminded of this :o)
13. #13 Alex
February 16, 2013
BTW, my wife says I’m an eternal five year-old, so I guess physics is the perfect career for me.
14. #14 Bruce W Fowler
February 17, 2013
This goes back to your earlier about no one doing classical mechanics (you said Newtonian but I’m bending) any more. You have to develop action-angle variables to derive Schrodinger’s equation. And you don’t do that at undergraduate level. Remember what Pope Leo (forget the number) told Anselm.
15. #15 Ron
February 17, 2013
Reminds me of this old amazon review:
“Anyone who’s been around children (or been a child themselves) knows about the “why?” game. It starts out with something like this: “Daddy (or Mommy), why is the sky blue?” So you explain about Rayleigh scattering and the fact that molecules in the atmosphere scatter photons with an efficiency that’s inversely proportional to the fourth power of the wavelength. You are hardly finished when the next question shoots across your bow: Daddy (or Mommy) why is there an atmosphere?” So you dutifully explain planetary evolution, the expulsion of vast quantities of carbon dioxide that facilitated the evolution of life forms that exploit photosynthesis, producing oxygen, etc. Then the third question comes “Daddy (or Mommy) why do planets form?” You follow this question with a short lecture on the planetary nebular hypothesis. But the questions don’t stop; they just keep coming and coming and coming. There is, it seems, never an answer that cannot be followed with “why?”
New comments have been temporarily disabled. Please check back soon. |
ec58c162f0aa7dab | GPGPU with WebGL: solving Laplace’s equation
This is the first post in what will hopefully be a series of posts exploring how to use WebGL to do GPGPU (General-purpose computing on graphics processing units). In this installment we will solve a partial differential equation using WebGL, the Laplace’s equation more specifically.
Discretizing the Laplace’s equation
The Laplace’s equation, \nabla^2 \phi = 0, is one of the most ubiquitous partial differential equations in physics. It appears in lot of areas, including electrostatics, heat conduction and fluid flow.
To get a numerical solution of a differential equation, the first step is to replace the continuous domain by a lattice and the differential operators with their discrete versions. In our case, we just have to replace the Laplacian by its discrete version:
\displaystyle \nabla^2 \phi(x) = 0 \rightarrow \frac{1}{h^2}\left(\phi_{i-1\,j} + \phi_{i+1\,j} + \phi_{i\,j-1} + \phi_{i\,j+1} - 4\phi_{i\,j}\right) = 0,
where h is the grid size.
If we apply this equation at all internal points of the lattice (the external points must retain fixed values if we use Dirichlet boundary conditions) we get a big system of linear equations whose solution will give a numerical approximation to a solution of the Laplace’s equation. Of the various methods to solve big linear systems, the Jacobi relaxation method seems the best fit to shaders, because it applies the same expression at every lattice point and doesn’t have dependencies between computations. Applying this method to our linear system, we get the following expression for the iteration:
\displaystyle \phi_{i\,j}^{(k+1)} = \frac{1}{4}\left(\phi_{i-1\,j}^{(k)} + \phi_{i+1\,j}^{(k)} + \phi_{i\,j-1}^{(k)} + \phi_{i\,j+1}^{(k)}\right),
where k is a step index.
Solving the discretized problem using WebGL shaders
If we use a texture to represent the domain and a fragment shader to do the Jacobi relaxation steps, the shader will follow this general pseudocode:
1. Check if this fragment is a boundary point. If it’s one, return the previous value of this point.
2. Get the four nearest neighbors’ values.
3. Return the average of their values.
To flesh out this pseudocode, we need to define a specific representation for the discretized domain. Taking into account that the currently available WebGL versions don’t support floating point textures, we can use 32 bits RGBA fragments and do the following mapping:
R: Higher byte of \phi.
G: Lower byte of \phi.
B: Unused.
A: 1 if it’s a boundary value, 0 otherwise.
Most of the code is straightforward, but doing the multiprecision arithmetic is tricky, as the quantities we are working with behave as floating point numbers in the shaders but are stored as integers. More specifically, the color numbers in the normal range, [0.0, 1.0], are multiplied by 255 and rounded to the nearest byte value when stored at the target texture.
My first idea was to start by reconstructing the floating point numbers for each input value, do the required operations with the floating numbers and convert the floating point numbers to color components that can be reliably stored (without losing precision). This gives us the following pseudocode for the iteration shader:
// wc is the color to the "west", ec is the color to the "east", ...
float w_val = wc.r + wc.g / 255.0;
float e_val = ec.r + ec.g / 255.0;
// ...
float val = (w_val + e_val + n_val + s_val) / 4.0;
float hi = val - mod(val, 1.0 / 255.0);
float lo = (val - hi) * 255.0;
fragmentColor = vec4(hi, lo, 0.0, 0.0);
The reason why we multiply by 255 in place of 256 is that we need val_lo to keep track of the part of val that will be lost when we store it as a color component. As each byte value of a discrete color component will be associated with a range of size 1/255 in its continuous counterpart, we need to use the “low byte” to store the position of the continuous component within that range.
Simplifying the code to avoid redundant operations, we get:
float val = (wc.r + ec.r + nc.r + sc.r) / 4.0 +
(wc.g + ec.g + nc.g + sc.g) / (4.0 * 255.0);
float lo = (val - hi) * 255.0;
The result of running the full code, implemented in GLSL, is:
Solving the Laplace's equation using a 32x32 grid. Click the picture to see the live solving process (if your browser supports WebGL).
As can be seen, it has quite low resolution but converges fast. But if we just crank up the number of points, the convergence gets slower:
Incompletely converged solution in a 512x512 grid. Click the picture to see a live version.
How can we reconcile these approaches?
The basic idea behind multigrid methods is to apply the relaxation method on a hierarchy of increasingly finer discretizations of the problem, using in each step the coarse solution obtained in the previous grid as the “starting guess”. In this mode, the long wavelength parts of the solution (those that converge slowly in the finer grids) are obtained in the first coarse iterations, and the last iterations just add the finer parts of the solution (those that converge relatively easily in the finer grids).
The implementation is quite straightforward, giving us fast convergence and high resolution at the same time:
Multigrid solution using grids from 8x8 to 512x512. Click the picture to see the live version.
It’s quite viable to use WebGL to do at least basic GPGPU tasks, though it is, in a certain sense, a step backward in time, as there is no CUDA, floating point textures or any feature that helps when working with non-graphic problems: you are on your own. But with the growing presence of WebGL support in modern browsers, it’s an interesting way of partially accessing the enormous computational power present in modern video cards from any JS application, without requiring the installation of a native application.
In the next posts we will explore other kinds of problem-solving where WebGL can provide a great performance boost.
5 thoughts on “GPGPU with WebGL: solving Laplace’s equation
1. Evgeny says:
Very nice application. There are floating point textures in the nightly Chrome (for about 2 months)
There is “The Energy2D Simulator” open source Java based project
with very nice turbulent flows (3-5 applets). They used implicit scheme and relaxation. You could move in this directions too :)
• mchouza says:
You can see a more complex example of the same techniques in this (not very accurate and still unfinished) simulation of the two slits experiment with the Schrödinger equation:
In my next posts I will probably transition to floating point textures for this kind of simulations, as working with the combination of integer textures and floating point values in the shaders is quite painful :-D
Thanks for your comment and your very interesting website!
2. […] This is very cool indeed — GPGPU with WebGL: solving Laplace’s equation […]
3. […] In a previous post we solved Laplace’s Equation using WebGL. We will see how to implement the Lattice Boltzmann algorithm using WebGL shaders in the next post, but this post has a preview of the solution: Click on the image to go to the demo. New obstacles can be created by dragging the mouse over the simulation area. […]
4. […] method is introduced with WebGL demos in this blog. Demidov wrote something about Multigrid recently. Real-Time Gradient-Domain Painting is an […]
Leave a Reply
You are commenting using your account. Log Out / Change )
Twitter picture
Facebook photo
Google+ photo
Connecting to %s |
fa8367798995eaa5 | Time Crystals
Crystals of Time
Jakub ZakrzewskiMarian Smoluchowski Institute of Physics, Jagiellonian University, 30-059 Krakow, Poland
Published October 15, 2012 | Physics 5116 (2012) | DOI: 10.1103/Physics.5.116
Classical Time Crystals
Alfred Shapere and Frank Wilczek
Published October 15, 2012 | PDF (free)
Quantum Time Crystals
Frank Wilczek
Published October 15, 2012 | PDF (free)
Space-Time Crystals of Trapped Ions
Published October 15, 2012 | PDF (free)
+Enlarge imageFigure 1
T. Li et al., Phys. Rev. Lett. (2012)
Figure 1 (a) A time crystal has periodic structures both in space and time. Particles arranged in a periodic pattern in space rotate in one direction even at the lowest energy state, determining periodicity in time. (b) An experimental realization of a time crystal proposed by Li et al. uses ultracold ions confined in a ring-shaped trapping potential. The ions form a periodic structure in space and, under a weak magnetic field, they move along the ring, creating a time crystal.
Spontaneous symmetry breaking is ubiquitous in nature. It occurs when the ground state (classically, the lowest energy state) of a system is less symmetrical than the equations governing the system. Examples in which the symmetry is broken in excited states are common—one just needs to think of Kepler’s elliptical orbits, which break the spherical symmetry of the gravitational force. But spontaneous symmetry breaking refers instead to a symmetry broken by the lowest energy state of a system. Well-known examples are the Higgs boson (due to the breaking of gauge symmetries), ferromagnets and antiferromagnets, liquid crystals, and superconductors. While most examples come from the quantum world, spontaneous symmetry breaking can also occur in classical systems [1].
Three articles in Physical Review Letters investigate a fascinating manifestation of spontaneous symmetry breaking: the possibility of realizing time crystals, structures whose lowest-energy states are periodic in time, much like ordinary crystals are periodic in space. Alfred Shapere at the University of Kentucky, Lexington, and Frank Wilczek at the Massachusetts Institute of Technology, Cambridge [2], provide the theoretical demonstration that classical time crystals can exist and, in a separate paper, Wilczek [3] extends these ideas to quantum time crystals. Tongcang Li at the University of California, Berkeley, and colleagues [4] propose an experimental realization of quantum time crystals with cold ions trapped in a cylindrical potential.
In nature, the most common manifestation of spontaneous symmetry breaking is the existence of crystals. Here continuous translational symmetry in space is broken and replaced by the discrete symmetry of the periodic crystal. Since we have gotten used to considering space and time on equal footing, one may ask whether crystalline periodicity can also occur in the dimension of time. Put differently, can time crystals—systems with time-periodic ground states that break translational time symmetry—exist? This is precisely the question asked by Alfred Shapere and Frank Wilczek.
How can one create a time crystal? The key idea of the authors, both for the classical and quantum case, is to search for systems that are spatially ordered and move perpetually in their ground state in an oscillatory or rotational way, as shown in Fig. 1. In the time domain, the system will periodically return to the same initial state.
Consider first the classical case. At first glance, it may seem impossible to find a system in which the lowest-energy state exhibits periodic motion: in classical mechanics the energy minimum is normally found for vanishing derivatives of positions (velocities) and momenta. However, Shapere and Wilczek [2] find a mathematical way out of this impasse. Assuming a nonlinear relation between velocity and momentum, they show that the energy can become a multivalued function of momentum with cusp singularities, with a minimum at nonzero velocities. While this provides a mathematical solution for creating classical time crystals, the authors fall short of identifying candidate systems. It remains to be seen if such an exotic velocity-momentum relation can be engineered in a real system.
For once, the quantum case seems to be easier than its classical counterpart. A number of familiar quantum phenomena almost do the trick, resulting in systems that rotate or oscillate in their lowest energy state. Wilczek suggests the example of a supercoducting ring, which can support a permanent current in its ground state under proper conditions. An even closer analogy can be found in a continuous wave laser. Spontaneous symmetry breaking makes the electric-field amplitude oscillate in time with a well-defined phase [5], almost creating a photonic time crystal. Yet in these systems—so close to being quantum crystals—a key element is missing: the persistent superconducting current and the laser light intensity are constant, not periodically varying, and the translational symmetry in time is not broken. How can one then add time periodicity to a quantum system?
Wilczek argues that this could be done in a system of quantum particles moving along a ring by introducing a mechanism that localizes them. If moving particles can be made to group in ordered “lumps,” this would naturally result in temporal periodicity as such lumps travel in a circle. Consider a ring filled with a large number of bosons with attractive interactions between them. If the system is isolated, its ground state is a symmetric state of constant density along the ring. But such a state is fragile: any interaction with the environment or any measurement (e.g., the determination of the position of an individual particle) makes the system collapse into a well-localized state along the ring, causing spontaneous symmetry breaking in space. Such localization can form a so-called soliton [6], a solution of the nonlinear Schrödinger equation that describes such a system. Wilczek’s insight is that an applied magnetic field, perpendicular to the ring, will cause the soliton to move. The resulting periodic motion would create a time crystal.
Wilczek does not address the problem of how to engineer such a system. But possible simple solutions come to mind. One could use cold neutral atoms with weak mutual attraction and exploit atom-laser interactions to create forces that mimic a magnetic field. Such a scheme to create an artificial effective magnetic field has been already realized in the laboratory [7]. An even simpler possibility is to stir an atomic ensemble while it is cooled towards Bose-Einstein condensation in an appropriate ring-shaped trap. Indeed, a stirring laser beam was previously used to create vortices in a condensate held in a magnetic trap [8]. Here, the stirring laser would introduce a rotation into the system, driving the soliton’s movement.
The article by Li et al. [4] provides the detailed description of an experiment that seems to be feasible. The scheme is based on beryllium ions trapped in a ring-shaped potential at nanokelvin temperatures. As a consequence of mutual Coulomb repulsion, the ions arrange periodically in space, forming a ring crystal. Similar geometries have already been demonstrated by the group of David Wineland [9]. Li and co-workers show that the addition of a weak magnetic field perpendicular to the ring would lead to the rotation of the spatially periodic ring crystal structure, thus creating a time crystal. Similarly to Wilczek’s model, spontaneous symmetry breaking of the rotational degree of freedom, through circular movement, is translated into breaking of translational time invariance.
Time crystals may sound dangerously close to a perpetual motion machine, but it is worth emphasizing one key difference: while time crystals would indeed move periodically in an eternal loop, rotation occurs in the ground state, with no work being carried out nor any usable energy being extracted from the system. Finding time crystals would not amount to a violation of well-established principles of thermodynamics. If they can be created, time crystals may have intriguing applications, from precise timekeeping to the simulation of ground states in quantum computing schemes. But they may be much more than advanced devices. Could the postulated cyclic evolution of the Universe be seen as a manifestation of spontaneous symmetry breaking akin to that of a time crystal? If so, who is the observer inducing—by a measurement—the breaking of the symmetry of time?
1. F. Strocchi, Symmetry breaking, Lecture Notes in Physics (Springer, Heidelberg, 2008)[Amazon][WorldCat].
2. A. Shapere and F. Wilczek, “Classical Time Crystals,” Phys. Rev. Lett. 109, 160402 (2012).
3. F. Wilczek, “Quantum Time Crystals,” Phys. Rev. Lett. 109, 160401 (2012).
4. T. Li, Z-X. Gong, Z-Q. Yin, H. T. Quan, X. Yin, P. Zhang, L-M. Duan, and X. Zhang, “Space-Time Crystals of Trapped Ions,” Phys. Rev. Lett. 109, 163001 (2012).
5. H. Haken, Synergetics: An Introduction (Springer-Verlag, Berlin, 1977)[Amazon][WorldCat].
6. R. Kanamoto, H. Saito, and M. Ueda, ”Critical Fluctuations in a Soliton Formation of Attractive Bose-Einstein Condensates,” Phys. Rev. A 73, 033611 (2006).
7. Y.-J. Lin, R. L. Compton, K. Jiménez-García, J. V. Porto, and I. B. Spielman, “Synthetic Magnetic Fields for Ultracold Neutral Atoms,” Nature 462, 628 (2009).
8. K. W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, “Vortex Formation in a Stirred Bose-Einstein Condensate,” Phys. Rev. Lett. 84, 806 (2000).
9. M. G. Raizen, J. M. Gilligan, J. C. Bergquist, W. M. Itano, and D. J. Wineland, “Ionic Crystals in a Linear Paul trap,” Phys. Rev. A 45, 6493 (1992).
About the Author: Jakub Zakrzewski
Jakub Zakrzewski
Jakub Zakrzewski is a Full Professor and the Head of the Atomic Optics Department at the Marian Smoluchowski Institute of Physics, Jagiellonian University in Krakow, Poland. He also leads QuantLab at the Mark Kac Complex Systems Research Centre. Over the years his research has explored quantum optics, laser theory, quantum chaos in atomic systems, and cold gases in optical lattices, especially in the presence of disorder. He worked at the University of Southern California, Los Angeles, and spent several years at the Laboratoire Kastler Brossel of the École Normale Supérieure and University of Paris 6. For more information visit http://chaos.if.uj.edu.pl/~kuba
Previous Viewpoint | Next Viewpoint
Leave a Reply
WordPress.com Logo
Twitter picture
Facebook photo
Google+ photo
Connecting to %s |
7b08a1216d494b74 | Special Articles - Committee for Skeptical Inquiry http://www.csicop.org/ en Copyright 2015 2015-07-29T18:47:12+00:00 Buddy Can You Paradigm? Fri, 01 Sep 2000 13:20:00 EDT info@csicop.org () http://www.csicop.org/sb/show/buddy_can_you_paradigm http://www.csicop.org/sb/show/buddy_can_you_paradigm A common view is that science progresses by a series of abrupt changes in which new scientific theories replace old ones that are “proven wrong” and never again see the light of day. Unless, as John Horgan has suggested, we have reached the ”end of science,” every theory now in use, such as evolution or gravity, seems destined to be overturned. If this is true, then we cannot interpret any scientific theory as a reliable representation of reality.
While this view of science originated with philosopher Karl Popper, its current widespread acceptance is usually imputed to Thomas Kuhn, whose The Structures of Scientific Revolutions (1962) was the best selling academic book of the twentieth century, and probably also the most cited.
Kuhn alleged that science does not progress gradually but rather through a series of revolutions. He characterized these revolutions with the now famous and overworked term paradigm shifts in which the old problem-solving tools, the “paradigms” of a discipline are replaced by new ones. In between revolutions, not much is supposed to happen. And after the revolution, the old paradigms are largely forgotten.
Being a physicist by training, Kuhn focused mainly on revolutions in physics. One of the most important examples he covered was the transition from classical mechanics to quantum mechanics that occurred in the early 1900s. In quantum mechanics, the physicist calculates probabilities for particles following certain paths, rather than calculating the exact paths themselves as in classical mechanics.
True, this constitutes a different procedure. But has classical mechanics become a forgotten tool, like the slide rule? Hardly. Except for computer chips, lasers, and a few other special devices, most of today’s hightech society is fully explicable with classical physics alone. While quantum mechanics is needed to understand basic chemistry, no special quantum effects are evident in biological mechanisms. Thus, most of what is labeled natural science in today’s world still rests on a foundation of Newtonian physics that has not changed much, in basic principles and methods, for centuries.
Nobel physicist Steven Weinberg, who was a colleague of Kuhn’s at Harvard and originally admired his work, has taken a retrospective look at Structures. In an article on the October 8, 1998, New York Review of Books called “The Revolution That Didn't Happen,” Weinberg writes:
It is not true that scientists are unable to “switch back and forth between ways of seeing,” and that after a scientific revolution they become incapable of understanding the science that went before it. One of the paradigm shifts to which Kuhn gives much attention in Structures is the replacement at the beginning of this century of Newtonian mechanics by the relativistic mechanics of Einstein. But in fact in educating new physicists the first thing that we teach them is still good old Newtonian mechanics, and they never forget how to think in Newtonian terms, even after they learn about Einstein’s theory of relativity. Kuhn himself as an instructor at Harvard must have taught Newtonian mechanics to undergraduates.
Weinberg maintains that the last “mega-paradigm shift” in physics occurred with the transition from Aristotle to Newton, which actually took several hundred years: “[N]othing that has happened in our understanding of motion since the transition from Newtonian to Einsteinian mechanics, or from classical to quantum physics fits Kuhn’s description of a ‘paradigm shift.'”
While tentative proposals often prove incorrect, I cannot think of a single case in recent times where a major physical theory that for many years has successfully described all the data within a wide domain was later found to be incorrect in the limited circumstances of that domain. Old, standby theories are generally modified, extended, often simplified with excess baggage removed, and always clarified. Rarely, if ever, are such well-established theories shown to be entirely wrong. More often the domain of applicability is refined as we gain greater knowledge or modifications are made that remain consistent with the overall principles.
This is certainly the case with Newtonian physics. The advent of relativity and quantum mechanics in the twentieth century established the precise domain for physics that had been constructed up to that point, but did not dynamite that magnificent edifice. While excess baggage such as the aether and phlogiston was cast off, the old methods still exist as smooth extrapolations of the new ones to the classical domain. The continued success and wide application of Newtonian physics must be viewed as strong evidence that it represents true aspects of reality, that it is not simply a human invention.
Furthermore, the new theories grew naturally from the old. When you look in depth at the history of quantum mechanics, you have to conclude it was not the abrupt transition from classical mechanics usually portrayed. Heisenberg retained the classical equations of motion and simply represented observables by matrices instead of real numbers. Basically, all he did was make a slight modification to the algebraic rules of mechanics by relaxing the commutative law. Quantization then arose from assumed commutation rules that were chosen based on what seemed to work. Similarly, the Schrödinger equation was derived from the classical Hamilton-Jacobi equation of motion. These were certainly major developments, but I maintain they were more evolutionary than revolutionary.
Where else in the history of science to the present can we identify significant paradigm shifts? With Darwin and Mendel, certainly, in biology. But what in biology since then? Discovering the structure of DNA and decoding the genome simply add to the details of the genetic mechanism that are being gradually enlarged without any abrupt change in the basic naturalistic paradigm.
A kind of Darwinian graduated evolution characterizes the development of science and technology. That is not to say that change is slow or uniform, in biological or social systems. The growth of science and technology in recent years has been quick but not instantaneous and still represents a relatively smooth extension of what went before.
SB: What are the main challenges to skepticism in Peru?
SB: What about the reaction from your students?
SB: How did you get involved in the skeptical movement?
|
538d68bcd89e4f81 | Publication Details
Louis, S., Marchant, T. R. & Smyth, N. F. (2013). Optical solitary waves in thermal media with non-symmetric boundary conditions. Journal of Physics A: Mathematical and Theoretical, 46 (5), 055201-1-055201-21.
Optical spatial solitary waves are considered in a nonlocal thermal focusing medium with non-symmetric boundary conditions. The governing equations consist of a nonlinear Schrödinger equation for the light beam and a Poisson equation for the temperature of the medium. Three numerical methods are investigated for calculating the ground and excited solitary wave solutions of the coupled system. It is found that the Newton conjugate gradient method is the most computationally efficient and versatile numerical technique. The solutions show that by varying the ambient temperature, the solitary wave is deflected towards the warmer boundary. Solitary wave stability is also examined both theoretically and numerically, via power versus propagation constant curves and numerical simulations of the governing partial differential equations. Both the ground and excited state solitary waves are found to be stable. The Newton conjugate gradient method should also prove extremely useful for calculating solitary waves of other related optical systems, which support nonlocal spatial solitary waves, such as nematic liquid crystals. © 2013 IOP Publishing Ltd.
Link to publisher version (DOI) |
1384e539dd8884d8 |
Red phosphorus (RP) has attracted more attention as a promising sodium storage material due to its ultra-high theoretical capacity, suitable sodiation potential. However, the low intrinsic electrical conductivity and large volume change of pristine RP lead to high polarization and fast capacity fading during cycling. Herein, surface synergistic protections on red phosphorus composite is successfully proposed by conductive poly (3, 4-ethylenedioxythiophene) (PEDOT) coating and electrolyte strategy. Nanoscale RP is confined in porous carbon skeleton and the outside is packaged by PEDOT coating via in-situ polymerization. Porous carbon provides rich access pathways for rapid Na+ diffusion and empty spaces accommodate the volume expansion of RP; PEDOT coating isolates the direct contact between electrolyte and active materials to form a stable solid electrolyte interphase. In addition, the reformulated electrolyte with 3 wt% SbF3 additive can stabilize the electrode surface and thus enhance the electrochemical performance, especially cycling stability and rate capability (433 mAh g-1 at high current density of 10 A g-1).
Metal-halide perovskite solar cells (PSCs) have attracted considerable attention during the past decade. However, due to the existence of non-radiative recombination losses, the best power conversion efficiency (PCE) is still lower than the theoretical limit defined by shockley-Queser theory. In this work, we investigate1,2,3-oxathiazin-4(3h)-one,6-methyl-2,2-dioxide (Acesulfame Potassium, abbreviated as AK) as a additional dopant for the 2,2′,7,7′-Tetrakis(N,N-di-p-methoxyphenyl-amine)-9,9′-spirobifluorene (Spiro-OMeTAD) and fabricate PSCs in the air. It is found that 12 mol % fraction of AK relative to lithium bis((trifluoromethyl)sulfonyl)-amide (Li-TFSI) reduced the non-radiative recombination from 86.05 % to 69.23 %, resulting in an average 0.08 V enhancement of Voc. The champion solar cell gives a PCE up to 21.9% and over 84% retention of the initial value during 720 h aging in dry air with 20%~30% humidity.
In studies of ion channel systems, due to the huge computational cost of polarizable force fields, classical force fields remain the most widely used for a long time. In this work, we used the AMOEBA polarizable atomic multipole force field in enhanced sampling simulations of single-channel gA and double-channel gA systems and investigated its reliability in characterizing ion-transport properties of the gA (Gramicidin A) ion channel under dimerization. The influence of gA dimerization on the permeation of potassium and sodium ions through the channel was described in terms of conductance, diffusion coefficient, and free energy profile. Results from the polarizable force field simulations show that the conductance of potassium and sodium ions passing through the single- and double-channel agrees well with experimental values. Further data analysis reveals the molecular mechanism of protein dimerization affecting the ion-transport properties of gA channels, i.e., protein dimerization accelerates the permeation of potassium and sodium ions passing through the double-channel by adjusting the environment around gA protein (the distribution of phospholipid head groups, ions outside the channel and bulk water), rather than directly adjusting the conformation of gA protein.
We have investigated the adsorption of 9 different adatoms on the (111) and (100) surfaces of Iridium (Ir) using first principles density functional theory. The study explores surface functionalization of Ir which would provide important information for further study towards investigating its functionality in catalysis and other surface applications. The adsorption energy, stable geometry, density of states and magnetic moment are the physical quantities of our interest. Strong hybridization between the adsorbates and the substrate electronic states revealed to impact the adsorption, while the magnetic moment of the adsorbates found to be suppressed. In general, stronger binding is observed on the (100) surface.
Based on density functional theory (DFT), a new silicon allotrope (C2-Si) is proposed in this work. The mechanical stability and dynamic stability of C2-Si are examined based on the elastic constants and phonon spectrum. According to the BH/GH values, C2-Si has ductility under ambient pressure; compared with Si64, Si96, I4/mmm and h-Si6, C2-Si is less brittle. Within the Heyd-Scuseria-Ernzerhof (HSE06) hybrid functional, C2-Si is an indirect narrow band gap semiconductor, and the band gap of C2-Si is only 0.716 eV, which is approximately two-thirds that of c-Si. The ratios of the maximum and minimum values of the Young's modulus, shear modulus and Poisson's ratio in their 3D spatial distributions for C2-Si are determined to characterize the anisotropy. In addition, the anisotropy in different crystal planes is also investigated via 2D representations of the Young’s modulus, shear modulus and Poisson’s ratio. In addition, among more than ten silicon allotropes, C2-Si has the strongest absorption ability for visible light.
The aggregation of Perylene Diimide (PDI) and its derivatives strongly depends on the molecular structure and therefore has a great impact on the excited states. By regulating the molecular stacking such as monomer, dimer, J- and/or H-aggregate, the formation of different excited states is adjustable and controllable. In this study, we have synthesized two kinds of PDI derivatives - undecane-substituted PDI (PDI-1) and diisopropylphenyl-substituted PDI (PDI-2), and the films are fabricated with spin-coating method. By employing photoluminescence (PL), time-resolved photoluminescence (TRPL), and transient absorption (TA) spectroscopy, the excited-state dynamics of two PDI amorphous films have been investigated systematically. The result reveals that both films have formed excimer after photoexcitation mainly due to the stronger electronic coupling among molecule aggregate in the amorphous film. It should be noted that the excited state dynamics in PDI-2 show a singlet fission like process, which is evidenced by the appearance of triplet state absorption. This study provides the dynamics of excited state in amorphous PDI films, and pave the way for better understanding and adjusting the excited state of amorphous films.
The pharmaceutically active compound atenolol, a kind of β-blockers, may result in adverse effects both for human health and ecosystems if it is excreted to the surface waters resources. To effectively remove atenolol in the environment, photodegradation, both direct and indirect, driven by sunlight is likely to play an important role. Among indirect photodegradation, singlet oxygen (1O2), as a pivotal reactive species, is likely to determine the fates of atenolol. Nevertheless, the kinetic information on the reaction of atenolol with singlet oxygen has not well been investigated and the reaction rate constant is still ambiguous. Herein, the reaction rate constant of atenolol with singlet oxygen is investigated directly through observing the decay of the 1O2 phosphorescence at 1270 nm. It is determined that the reaction rate constant between atenolol and 1O2 is 7.0×105 M-1 s-1 in D2O, 8.0×106 M-1 s-1 in ACN and 8.4×105 M-1 s-1 in EtOH, respectively. Furthermore, the solvent effects on the title reaction was also investigated. It is revealed that the solvents with strong polarity and weak hydrogen donating ability are suitable to achieve high rate constant values. These kinetics information on the reaction of atenolol with singlet oxygen may provide fundamental knowledge to the indirect photodegradation of β-blockers.
In order to reduce the impact of CdS photogenerated electron-hole recombination on its photocatalytic performance, a narrow band gap semiconductor MoS2 and organic macromolecular cucurbit[n]urils(Q[n]) were used to modify CdS. Q[n]/CdS-MoS2 (n=6, 7, 8) composite photocatalysts were synthesized by hydrothermal method. Infrared spectroscopy (FT-IR), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), field emission scanning electron microscopy (SEM), ultraviolet-visible (UV-Vis) and photoluminescence spectrum (PL) were used to characterize the structure, morphology and optical properties of the products, and the catalytic degradation of the solutions of methylene blue, rhodamine B and crystal violet by Q[n]/CdS-MoS2 composite catalyst was investigated. The results showed that the Q[n] played a regulatory role on the growth and crystallization of CdS-MoS2 particles, Q[n]/CdS-MoS2 (n=6, 7, 8) formed flower clusters with petal-like leaves, the flower clusters of petal-like leaves increased the surface area and active sites of the catalyst, the Q[n]/CdS-MoS2 barrier width decreased, the electron-hole pair separation efficiency was improved in the Q[6]/Cds-MoS2. Q[n] makes the electron-hole pair to obtain better separation and migration. The Q[6]/CdS-MoS2 and Q[7]/CdS-MoS2 have good photocatalytic activity for methylene blue, and the catalytic process is based on hydroxyl radical principle.
The laser-induced fluorescence excitation spectra of UF have been recorded in the range of 17000−19000 cm-1 using two-dimensional spectroscopy. High resolution dispersed fluorescence spectra and fluorescence decay curves time-resolved fluorescence spectroscopy were also recorded. Three rotationally resolved bands have been intensively analyzed, and all bands were found to be derived from the ground state X(1)4.5 with a rotational constant of 0.23421 cm-1. The low-lying electronic states have been observed near 435 and 651 cm-1 in the dispersed fluorescence spectra, which were assigned as Ωʹ = 3.5, and 2.5, respectively. The vibrational constants for the X(1)4.5 and X(1)3.5 states have been calculated. The branching ratios of the dispersed fluorescence spectra for the [18.62]3.5, [17.72]4.5, and [17.65]4.5 states were reported. Radiative lifetimes of 332(9), 825(49), and 433(15) ns for the [18.62]3.5, [17.72]4.5, and [17.65]4.5 states were obtained by fitting the fluorescence decay curves time-resolved fluorescence spectroscopy, respectively. Transition dipole moments were performed using the branching ratios and the radiative lifetimes.
Dimension-controllable supramolecular organic frameworks (SOFs) with aggregation-enhanced fluorescence are hierarchically fabricated through the host-guest interactions of CB[8] and coumarin-modified tetraphenylethylene derivatives (TPEC). The three-dimensional (3D) layered SOFs could be constructed from the gradual stacking of two-dimensional (2D) mono-layered structures via simply regulating the self-assembly conditions including the culturing time and concentration. Upon light irradiation under the wavelength of 350 nm, the photodimerization of coumarin moieties occurred, which resulted in the transformation of the resultant TPECn/CB[8]4n 2D SOFs into robust covalently-connected 2D polymers with molecular thickness. Interestingly, the supramolecular system of TPECn/CB[8]4n exhibited intriguing multicolor fluorescence emission from yellow to blue from 0 to 24 hours at 365 nm irradiation, possessing potential applicability for cell imaging and photochromic fluorescence ink.
A slow and clean fluorine atom beam source is one of the essential components for the low-collision energy scattering experiment involving fluorine atom. In this paper, we describe a simple yet effective photolysis fluorine atom beam source based on ultraviolet laser photolysis, the performance of which was demonstrated by high-resolution time-of-flight spectra from the reactive scattering of F+HD→HF+D. This beam source paves the way for studies of low energy collisions with fluorine atoms.
The interaction of reactants with catalysts has always been an important subject for catalytic reactions. As a promising catalyst with versatile applications, titania has been intensively studied for decades. In this paper we have investigated the role of bridge bonded oxygen vacancy (O<sub>v</sub>) in methyl groups and CO adsorption on rutile TiO<sub>2</sub>(110) (R-TiO<sub>2</sub>(110)) with the temperature programmed desorption (TPD) technique. The results show a clear different tendency of the desorption of methyl groups adsorbed on bridge bonded oxygen (O<sub>b</sub>), and CO molecules on the five coordinate Ti<sup>4+</sup> sites (Ti<sub>5c</sub>) as the O<sub>v</sub> concentration changes, suggesting that the surface defects may have crucial influence on the absorption of species on different sites of R-TiO<sub>2</sub>(110).
Owing to their unique structural, electronic, and physico-chemical properties, molybdenum clusters are expected to play an important role in future nanotechnologies. However, their ground states are still under debate. In this study, the crystal structure analysis by particle swarm optimization (CALYPSO) approach is used for the global minima search, which is followed by first-principles calculations, to detect an obvious dimerization tendency in Mo<sub>n</sub> (n = 2-18) clusters when the 4<i>s</i> and 4<i>p</i> semicore states (SCS) are not regarded as the valence states. Further, the clusters with even number of atoms are usually magic clusters with high stability. However, after including the 4<i>s</i> and 4<i>p</i> electrons as valence electrons, the dimerization tendency exhibits a drastic reduction because the average hybridization indices H<sub>sp</sub>, H<sub>sd</sub>, and H<sub>pd</sub> are reduced significantly. Overall, this work reports new ground states of Mo<sub>n</sub> (n = 11, 14, 15) clusters and proves that SCS are essential for Mo<sub>n</sub> clusters.
Rational designs of electrocatalytic active sites and architectures are of great importance to develop cost-efficient non-noble metal electrocatalysts towards efficient oxygen reduction reaction (ORR) for high-performance energy conversion and storage devices. In this paper, active amorphous Fe-based nanoclusters (Fe NC) are elaborately embedded at the inner surface of balloon-like N-doped hollow carbon (Fe NC/C<sub>h</sub> sphere) as an efficient ORR electrocatalyst with an ultrathin wall of about 10 nm. When evaluated for electrochemical performance, Fe NC/C<sub>h</sub> sphere exhibits decent ORR activity with a diffusion-limited current density of ~5.0 mA cm<sup>-2</sup> and a half-wave potential of ~0.81 V in alkaline solution, which is comparable with commercial Pt/C and superior to Fe nanoparticles (NP) supported on carbon sheet (Fe NP/C sheet) counterpart. The electrochemical analyses combined with electronic structure characterizations reveal that robust Fe-N interactions in amorphous Fe nanoclusters are helpful for the adsorption of surface oxygen-relative species, and the strong support effect of N-doped hollow carbon is benefit for accelerating the interfacial electron transfer, which jointly contributes to improved ORR kinetics for Fe NC/C<sub>h</sub> sphere.
In order to search for high energy density materials, various 4, 8-dihydrodifurazano[3,4-b,e]pyrazine based energetic materials were designed. Density functional theory was employed to investigate the relationships between the structures and properties. The calculated results indicated that the properties of these designed compounds were influenced by the energetic groups and heterocyclic substituents. The –N3 energetic group was found to be the most effective substituent to improve the heats of formation of the designed compounds while the tetrazole ring/–C(NO2)3 group contribute much to the values of detonation properties. The analysis of bond orders and bond dissociation energies showed that the addition of –NHNH2, –NHNO2, –CH(NO2)3 and –C(NO2)3 groups will decrease the bond dissociation energies remarkably. Compounds A8, B8, C8, D8, E8, and F8 were finally screened as the potential candidates for high energy density materials since these compounds possess excellent detonation properties and acceptable thermal stabilities. Additionally, the electronic structures of the screened compounds were calculated.
Alkyl dinitrites have attracted attention as an important type of nitrosating agent and a pollution source in atmosphere. The reactivity and chemistry of alkyl dinitrites induced by the two ONO functional groups are relatively unknown. In this work, decomposition of 1,3-cyclohexane dinitrite and 1,4-cyclohexane dinitrite are studied by electron impact ionization mass spectroscopy (EI-MS). Apart from NO<sup>+</sup> (m/z 30), fragment ions m/z 43 and 71 are most abundant for the 1,3-isomer. On the other hand, fragments m/z 29, 57, 85 and 97 stand out in the EI-MS spectrum of 1,4-isomer. Possible dissociation mechanisms of the two dinitrites are investigated by theoretical calculations. The results reveal that the ring-opening of 1,3-cyclohexane dinitrite mainly starts from the intermediate ion (M-NO)<sup>+</sup> by cleavage of two αC-βC bonds. For 1,4-cyclohexane dinitrite, in addition to the decomposition via intermediate (M-NO)<sup>+</sup>, cleavage of βC-βC bonds can occur directly from the parent cation M<sup>+</sup>. The results will help to understand the structural related chemistry of alkyl dinitrites in atmosphere and in NO transfer process.
After binding to human serum albumin (HSA), bilirubin could undergo photo-isomerization and photo-induced cyclization process. The latter process would result the formation of a product named lumirubin. These photo induced behaviors are the fundamental of clinical therapy for neonatal jaundice. Previous studies have reported that the addition of long chain fatty acids is beneficial to the generation of lumirubin, yet no kinetic study have revealed the mechanism behind. In this study, how palmitic acid affects the photochemical reaction process of bilirubin in HSA is studied by using femtosecond transient absorption and fluorescence up-conversion techniques. With the addition of palmitic acid, the excited population of bilirubin prefers to return to its hot ground state (S0) through a 4 picosecond decay channel rather than the intrinsic ultrafast decay pathways (<1 picosecond). This effect prompts the Z-Z to E-Z isomerization at the S0 state and then further increase the production yield of lumirubin. This is the first time to characterization the promoting effect of long chain fatty acid in the process of phototherapy with femtosecond time resolution spectroscopy and the results can provide useful information to benefit the relevant clinical study.
The photochemical reaction of potassium ferrocyanide (K4Fe(CN)6}) exhibits excitation wavelength dependence and non-Kasha rule behavior. In this study, the excited-state dynamics of K4Fe(CN)6 were studied by transient absorption spectroscopy. Excited state electron detachment (ESED) and phtoaquation reactions were clarified by comparing the results of 260 , 320 , 340 , and 350 nm excitations. ESED is the path to generate a hydrated electron (eaq-). ESED energy barrier varies with the excited state, and it occurrs even at the first singlet excited state (1T1g). The 1T1g state shows ~0.2 ps lifetime and converts into triplet [Fe(CN)6]4- by intersystem crossing. Subsequently, 3[Fe(CN)5]3- appears after one CN- ligand is ejected. In sequence, H2O attacks [Fe(CN)5]3- to generate [Fe(CN)5H2O]3- with a time constant of approximately 20 ps. The 1T1g state and eaq- exhibit strong reducing power. The addition of UMP to the K4Fe(CN)6 solution decreased the yield of eaq- and reduced the lifetimes of the eaq- and 1T1g state. The obtained reaction rate constant of 1T1g state and UMP was 1.7×1014 M-1 s-1, and the eaq- attachment to UMP was ~8×109 M-1 s-1. Our results indicate that the reductive damage of K4Fe(CN)6 solution to nucleic acids under ultraviolet irradiation cannot be neglected.
In the pioneering work by R. A. Marcus, the solvation effect on electron transfer (ET) processes was investigated, giving rise to the celebrated nonadiabatic ET rate formula. In this work, on the basis of the thermodynamic solvation potentials analysis, we reexamine Marcus’ formula with respect to the Rice-Ramsperger-Kassel-Marcus (RRKM) theory. Interestingly, the obtained RRKM analogue, which recovers the original Marcus’ rate that is in a linear solvation scenario, is also applicable to the nonlinear solvation scenarios, where the multiple curve-crossing of solvation potentials exists. Parallelly, we revisit the corresponding Fermi’s golden rule results, with some critical comments against the RRKM analogue proposed in this work. For illustration, we consider the quadratic solvation scenarios, on the basis of physically well-supported descriptors.
Abstract The structure and stability of the compounds MRg+ and MRgF (Rg= Ar, Kr and Xe; M= Co, Rh and Ir ) were investigated using the B3LYP, MP2, MP4(SDQ) and CCSD(T) methods. We reported the geometry, vibrational frequencies and thermodynamics properties of these compounds. A series of theoretical methods on the basis of wavefunction analysis, including NBO, AIM, ELF and energy decomposition analysis, were performed to explore bonding nature of the M-Rg and Rg-F bonds. These bonds are mainly noncovalent, the metal weakly interacts with Rg in MRg+, but their interaction is much stronger in MRgF. The neutral molecule MRgF can be well described by the Lewis structure [MRg]+F-.
Rhodium-catalyzed cycloaddition reaction was calculated by density functional theory (DFT) M06-2X method to directly synthesize benzoxepine and coumarin derivatives. In this paper, we conducted a computational study of two competitive mechanisms in which the carbon atom of acetylene or carbon monoxide attacked and inserted from two different directions of the six-membered ring reactant to clarify the principle characteristics of this transformation. The calculation result reveal (1) the insertion process of alkyne or carbon monoxide is the key step of the reaction; (2) For the (5 + 2) cycloaddition reaction of acetylene, higher energy is required to break the Rh-O bond of the reactant, and the reaction tends to complete the insertion from the side of the Rh-C bond; (3) For the (5 + 1) cycloaddition of carbon monoxide, both reaction paths have lower activation free energy, and the two will generate a competition mechanism.
Laser flash photolysis was used to investigate the photoinduced reactions of excited triplet of bioquinone molecule duroquinone (DQ) with tryptophan (Trp) and tyrosine (Tyr) in Acetonitrile-Water (MeCN-H2O) and Ethylene Glycol-Water (EG-H2O) solutions. The reaction mechanisms were analyzed and the reaction rate constants were measured based on Stern-Volmer equation. The H-atom transfer reaction from Trp (Tyr) to 3DQ* is dominant after the formation of 3DQ* during the laser photolysis. For DQ and Trp in MeCN-H2O and EG-H2O solutions, 3DQ* captures H-atom from Trp to generate duroquinone neutral radical DQH•, carbon-centered tryptophan neutral radical Trp•/NH and nitrogen-centered tryptophan neutral radical Trp/N•. For DQ and Tyr in MeCN-H2O and EG-H2O solutions, 3DQ* captures H-atom from Tyr to generate duroquinone neutral radical DQH• and tyrosine neutral radical Tyr/O•. The H-atom transfer reaction rate constants of 3DQ* with Trp (Tyr) are on the level of 109 L•mol-1•s-1, nearly controlled by diffusion. The reaction rate constants of 3DQ* with Trp (Tyr) in MeCN/H2O solution is larger than that in EG/H2O solution, which agree with Stokes–Einstein relationship qualitatively
Consistency between density functional theory calculations and X-ray photoelectron spectroscopy measurements confirms our predications on the undercoordination-induced local bond relaxation and core level shift of alkali metal, which determine the surface, size and thermal properties of materials. Zone-resolved photoelectron spectroscopy analysis method and bond order-length-strength theory can be utilized to quantify the physical parameters regarding bonding identities and electronic property of metal surfaces, which allows for the study of the core-electron binding-energy shifts in alkali metals. By employing these methods and first principle calculation in this work, we can obtain the information of bond and atomic cohesive energy of under-coordinated atoms at the alkali metal surface. In addition, the effect of size and temperature towards the binding-energy in the surface region can be seen from the view point of Hamiltonian perturbation by atomic relaxation with atomic bonding.
Methyl 2-furoate (FAME2) is a renewable biofuel with the development of the new synthesis method of dimethyl furan-2,5-dicarboxylate. The potential energy surfaces (PES) of H-abstractions and OH-additions between FAME2 and hydroxyl radical (OH) were studied at the CCSD(T)/CBS//M062X/cc-pVTZ level. The following isomerization and decomposition reactions were also determined for the main radicals produced. The results show that the H-abstraction on the branch methyl group is the dominant channel and that the OH-addition reactions on the furan ring has a significant pressure dependency. The current rate coefficients provide important kinetic data for the further improving of the combustion mechanism of FAME2, which bring a trusty reference for further research on practical fuels.
A protein may exist as an ensemble of different conformations in solution, which cannot be represented by a single static structure. Molecular dynamics (MD) simulation has become a useful tool for sampling protein conformations in solution, but force fields and water models are important issues. This work presents a case study of the bacteriophage T4 lysozyme (T4L). We have found that MD simulations using a classic AMBER99SB force field and TIP4P water model cannot well describe hinge-bending domain motion of the wild-type T4L at the timescale of one micorsecond. Other combinations, such as a residue-specific force field called RSFF2+ and a dispersion-corrected water model TIP4P-D, are able to sample reasonable solution conformations of T4L, which are in good agreement with experimental data. This primary study may provide candidates of force fields and water models for further investigating conformational transition of T4L. 蛋白质在溶液中可能以不同构象的集合形式存在,不能用单一的静态结构来表示。分子动力学模拟已成为对蛋白质溶液构象进行采样的有用工具,但力场和水模型的准确性是一个关键问题。这项工作介绍了对噬菌体T4溶菌酶(T4L)的个例研究。我们发现,使用经典的力场和水模型,如AMBER99SB/TIP4P,1微秒时间尺度的分子动力学模拟不能很好地描述野生型T4L的结构域开合运动。但是其它力场和水模型组合,如残基特异性力场RSFF2+和离散校正的水模型TIP4P-D,能够对T4L溶液构象进行较为合理的采样,且与实验数据有良好的一致性。我们的初步研究可为进一步通过长时间分子动力学模拟及自由能计算研究T4L的构象转变机制提供力场和水模型的选择。
Photocatalytic water splitting to generate hydrogen gas is an ideal solution for environmental pollution and unsustainable energy issues. In the past few decades, many efforts have been made to increase the efficiency of hydrogen production. One of the most important ways is to achieve light absorption in the visible range to improve the conversion efficiency of solar energy into chemical energy, but it still presents great challenges. We here predicted a novel organic film, which can be obtained by polymerizing HTAP molecules, as an ideal material for photocatalytic water splitting. Basing on first-principles calculations and Born-Oppenheimer quantum molecular dynamic simulations, the metal-free two-dimensional nanomaterial has proven to be structurally stable and with a direct band gap of 2.12 eV, which satisfies the requirement of light absorption in the visible range. More importantly, the conduction bands and valence bands completely engulf the redox potentials of water, making the such film a promising photocatalyst for water splitting. This construction method through the topological periodicity of organic molecules provides a design scheme for the photocatalyst for water splitting.
Magnetic tunnel junction with a large tunneling magnetoresistance has attracted great attention due to its importance in the spintronics applications. By performing extensive density functional theory calculations combined with the nonequilibrium Green's function method, we examine the spin-dependent transport properties of a magnetic tunnel junction, in which a non-polar SrTiO3 barrier layer is sandwiched between two Heusler alloy Co2MnSi electrodes. Theoretical results clearly reveal that the near perfect spin-filtering effect appears in the parallel magnetization configuration (PC). The transmission coefficient in the PC at the Fermi level is several orders of magnitude larger than that of in the antiparallel magnetization configuration, resulting in a huge tunneling magnetoresistance (i.e. >10^6), which originates from the coherent spin-polarized tunneling, due to the half-metallic nature of Co2MnSi electrode and the significant spin-polarization of the interfacial Ti 3d orbital.
Binding and release of ligands are critical for biological functions of many proteins, and thus it is important to determine these highly dynamic processes. Although there are experimental techniques to determine structure of a protein-ligand complex, it only provides a static picture of the system. With the rapid increase of computing power and improved algorithms, molecule dynamics (MD) simulations have diverse of superiority in probing the binding and release process. However, it remains a great challenge to overcome the time and length scales when the system becomes large. This article presents an enhanced sampling tool for ligand binding and release, which is based on iterative multiple independent MD simulations guided by contacts formed between the ligand and the protein. From the simulation results on adenylate kinase (AdK), we observe the process of ligand binding and release while the conventional MD simulations at the same time scale cannot.
Ruthenium (Ru) serves as a promising catalyst for ammonia synthesis via the Haber-Bosch process, identification of the structure sensitivity to improve the activity of Ru is important but not fully explored yet. We present a calculation and micro-kinetic rate calculation on nitrogen activation, a crucial step in ammonia synthesis, over a variety of hexagonal close-packed (hcp) and face-center cubic (fcc) Ru facets. Hcp {21-30} facet exhibits the highest activity toward N<sub>2</sub> dissociation in hcp Ru, followed by the monatomic step sites. The other hcp Ru facets have N<sub>2</sub> dissociation rates at least three orders lower. Fcc {211} facet shows the best performance for N<sub>2</sub> activation in fcc Ru, followed by {311}, which indicates stepped surfaces make great contributions to the overall reactivity. Although hcp Ru {21-30} facet and monatomic step sites have lower or comparable activation barriers compared with fcc Ru {211} facet, fcc Ru is proposed to be more active than hcp Ru for N<sub>2</sub> conversion due to the exposure of the more favorable active sites over step surfaces in fcc Ru. Our work provides new insights into the crystal structure sensitivity of N<sub>2</sub> activation for mechanistic understanding and rational design of ammonia synthesis over Ru catalysts.
The potential energy landscape of the neutral Ni<sub>2</sub>(CO)<sub>5</sub> complex was re-examined. A new <i>C</i><sub>2V</sub> structure with double bridging carbonyls is found to compete with the previously proposed triply carbonyl-bridged <i>D</i><sub>3h</sub> isomer for the global minimum of Ni<sub>2</sub>(CO)<sub>5</sub>. Despite that the tri-bridged isomer possesses the more favored (18, 18) configuration as described in the textbook, where both metal centers satisfy the 18-electron rule, the neutral Ni<sub>2</sub>(CO)<sub>5</sub> complex prefers the di-bridged geometry with (18, 16) configuration. The isomerization energy decomposition analysis reveals that the structural preference is a consequence of the maximization of electrostatic and orbital interactions.
The interaction between Amyloid β (Aβ) peptide and acetylcholine receptor is the key for our understanding of how Aβ fragments block the ion channels within the synapses and thus induce Alzheimer's disease. Here, molecular docking and molecular dynamics (MD) simulations were performed for the structural dynamics of the docking complex consists of Aβ and α7-nAChR (α7 nicotinic acetylcholine receptor), and the inter-molecular interactions between ligand and receptor were revealed. The results show that Aβ25-35 bound to α7-nAChR through hydrogen bonds and complementary shape, and the Aβ25-35 fragments would easily assemble in the ion channel of α7-nAChR, then block the ion transfer process and induce neuronal apoptosis. The simulated amide-I band of Aβ25-35 in the complex is located at 1650.5 cm-1, indicating the backbone of Aβ25-35 tends to present random coil conformation, which is consistent with the result obtained from cluster analysis. Currently existed drugs were used as templates for virtual screening, eight new drugs were designed and semi-flexible docking was performed for their performance. The results show that, the interactions between new drugs and α7-nAChR are strong enough to inhibit the aggregation of Aβ25-35 fragments in the ion channel, and also be of great potential in the treatment of Alzheimer's disease.
The hierarchical stochastic Schrödinger equations (HSSE) are a kind of numerically exact wavefunction-based approaches suitable for the quantum dynamics simulations in a relatively large system coupled to a bosonic bath. Starting from the influence-functional description of open quantum systems, this review outlines the general theoretical framework of HSSEs and their concrete forms in different situations. The applicability and efficiency of HSSEs are exemplified by the simulations of ultrafast excitation energy transfer processes in large-scale systems.
The binding energy spectrum (BES) and electron momentum profiles (EMPs) of the inner orbitals of methyl iodide have been measured using an electron momentum spectrometer at the impact energy of 1200 eV plus binding energy. Two peaks in the BES, arising from the spin-orbit (SO) splitting, are observed and the corresponding EMPs are obtained. Relativistic density functional calculations are performed to elucidate the experimental EMPs of two SO splitting components, showing agreement with each other except for the intensity in low momentum region. The measured high intensity in the low momentum region can be further explained by the distorted wave calculation.
Stars with masses between 1 and 8 solar masses (M⊙) lose large amounts of material in the form of gas and dust in the late stages of stellar evolution, during their Asymptotic Giant Branch phase. Such stars supply up to 35% of the dust in the interstellar medium and thus contribute to the material out of which our solar system formed. In addition, the circumstellar envelopes of these stars are sites of complex, organic chemistry with over 80 molecules detected in them. We show that internal ultraviolet photons, either emitted by the star itself or from a close-in, orbiting companion, can significantly alter the chemistry that occurs in the envelopes particularly if the envelope is clumpy in nature. At least for the cases explored here, we find that the presence of a stellar companion, such as a white dwarf star, the high flux of UV photons destroys H2O in the inner regions of carbon-rich AGB stars to levels below those observed and produces species such as C+ deep in the envelope in contrast to the expectations of traditional descriptions of circumstellar chemistry.
Two-photon fluorescence dyes have shown promising applications in biomedical imaging. However, the substitution site effect on geometric structures and photophysical properties of fluorescence dyes is rarely illustrated in detail. In this work, a series of new lipid droplets detection dyes are designed and studied, molecular optical properties and non-radiative transitions are analyzed. The intramolecular weak interaction and electron-hole analysis reveal its inner mechanisms. All dyes are proved to possess excellent photophysical properties with high fluorescence quantum efficiency and large stokes shift as well as remarkable TPA cross section. Our work reasonably elucidates the experimental measurements and the effects of substitution site on two-photon absorption and excited states properties of Lipid droplets detection NAPBr dyes are highlighted, which could provide a theoretical perspective for designing efficient organic dyes for lipid droplets detection in biology and medicine fields.
We predict two novel group 14 element alloys Si2Ge and SiGe2 in P6222 phase in this work through first-principles calculations. The structures, stability, elastic anisotropy, electronic and thermodynamic properties of these two proposed alloys are investigated systematically. The proposed P6222-Si2Ge and -SiGe2 have a hexagonal symmetry structure, and the phonon dispersion spectra and elastic constants indicate that these two alloys are dynamically and mechanically stable at ambient pressure. The elastic anisotropy properties of P6222-Si2Ge and -SiGe2 are examined elaborately by illustrating the surface constructions of Young’s modulus, the contour surfaces of shear modulus, and the directional dependences of Poisson’s ratio, as well as discussing and comparing the differences with their corresponding group 14 element allotropes P6222-Si3 and -Ge3. Moreover, the Debye temperature and sound velocities are analyzed to study the thermodynamic properties of the proposed P6222-Si2Ge and -SiGe2.
The vacuum ultraviolet (VUV) photodissociation of OCS via the F 31Π Rydberg states was investigated in the range of 134–140 nm, by means of the time-sliced velocity map ion imaging technique. The images of S (1D2) products from the CO (X1Σ+) + S (1D2) dissociation channel were acquired at five photolysis wavelengths, corresponding to a series of symmetric stretching vibrational excitations in OCS (F 31Π, v1=0-4). The total translational energy distributions, vibrational populations and angular distributions of CO (X1Σ+, v) coproducts were derived. The analysis of experimental results suggests that the excited OCS molecules dissociate to CO (X1Σ+) and S (1D2) products via non-adiabatic couplings between the upper F 31Π states and the lower-lying states both in the C∞v and Cs symmetry. Furthermore, strong wavelength dependent behavior has been observed: the greatly distinct vibrational populations and angular distributions of CO (X1Σ+, v) products from the lower (v1=0-2) and higher (v1=3,4) vibrational states of the excited OCS (F 31Π, v1) demonstrate that very different mechanisms are involved in the dissociation processes. This study provides evidence for the possible contribution of vibronic coupling and the crucial role of vibronic coupling on the VUV photodissociation dynamics.
Herein we present a facile approach for the preparation of a novel hierarchically porous carbon, in which seaweeds serve as carbon source and KOH as activator. The fabricated KOH-activated seaweed carbon (K-SC) displays strong affinity towards tetracycline (TC) with maximum uptake quantity of 853.3 mg g–1, significantly higher than other TC adsorbents. The superior adsorption capacity ascribes to large specific surface area (2614 m2 g−1) and hierarchically porous structure of K-SC, along with strong π–π interactions between TC and K-SC. In addition, the as-prepared K-SC exhibits fast adsorption kinetics, capable of removing 99% of TC in 30 min. Meanwhile, the exhausted K-SC can be regenerated for four cycling adsorption without an obvious degradation in capacities. More importantly, pH and ionic strengths barely affect the adsorption performance of K-SC, implying electrostatic interactions hardly play any role in TC adsorption process. Furthermore, the K-SC packed fixed-bed column (0.1 g of adsorbents) can continually treat 2780 mL solution spiked with 5.0 mg g–1 TC before reaching the breakthrough point. All in all, the fabricated K-SC equips with high adsorption capacity, fast adsorption rate, glorious anti-interference capability and good reusability, which make it holding great feasibilities for treating TC contamination in real applications.
Three kinds of thermochromic materials (DC8, DC12, DC16) were synthesized by linking the rigid 1,4-bis[2-(4-pyridyl)ethenyl]-benzene (bpeb) with different lengths of alkyl chains. They exhibit remarkable fluorescent color changes under the irradiation of 365 nm light with elevating temperature, which is supposed to be caused by the transition between the crystal state and the amorphous state. Interestingly, the DC16 solid also has a photochromic character. It should be noticed that the phase transition temperatures of three materials measured by differential scanning calorimetry are higher than those of the fluorescence color changes during the heating process. Thus, the allochroic effect is attributed to the synergistic effect of both heating and photo-inducement (365 nm). Ethanol can turn the heated powder into the initial crystal again which indicates that their thermochromic behavior is reversible and makes the fluorescence recover. This study is of significance for understanding the structure-property relation and guides thermochromic molecular design.
Theoretical study was carried out with OX2 (X = Halogen) molecules and calculation results showed that delocalized π36 bonds exists in their electronic structures and O atoms adopt the sp2 type of hybridization, which violated the VSEPR theory’s prediction of sp3 type. Delocalization stabilization energy (DSE) was proposed to measure delocalized π36 bond’s contribution to energy decrease and proved that it brings extra-stability to the molecule. According to our analyses, these phenomena can be summarized as a kind of coordinating effect.
The boom in ultra-thin electronic devices and the growing need for humanization greatly facilitated the development of wearable flexible micro-devices. But the technology to deposit electrode material on flexible substrate is still in its infancy. Herein, the flexible symmetric micro-supercapacitors made of carbon nanotubes (CNTs) on commercial printing paper as electrode materials were fabricated by combining tetrahedral preparator auxiliary coating method and laser-cutting interdigital configuration technique on a large scale. The electrochemical performance of obtained micro-supercapacitors can be controlled and tuned by simple choosing the different model of tetrahedral preparatory to obtain CNTs film of different thickness. As expected, the micro-supercapacitor based on CNTs film can deliver an areal capacitance of up to 4.56 mF cm-2 at current of 0.02 mA. Even if, micro-supercapacitor undergo continuous 10000 cycles, the performance of device can still remain nearly 100%. The as-demonstrated tetrahedral preparator auxiliary coating method and laser-cutting interdigital configuration technique provide new perspective for preparing microelectronics in an economical way. The paper electrode appended by CNTs achieves steerable areal capacitance, showing broad application prospect in fabricating asymmetric micro-supercapacitor with flexible planar configurations in the future.
Our experimental progresses on the reaction dynamics of dissociative electron attachment (DEA) to carbon dioxide (CO2) are summarized in this review. First, we introduce some fundamentals about the DEA dynamics and provide an epitome about the DEAs to CO2. Second, our development on the experimental techniques is described, in particular, on the high-resolution velocity map imaging apparatus in which we put a lot of efforts during the past two years. Third, our findings about the DEA dynamics of CO2 are surveyed and briefly compared with the others’ work. At last, we give a perspective about the applications of the DEA studies and highlight the inspirations in the production of molecular oxygen on Mars and the catalytic transformations of CO2.
Si (111) electrode has been widely used in electrochemical and photoelectrochemical studies. The potential dependent measurements of the second harmonic generation (SHG) were performed to study Si (111) electrolyte interface. At different azimuthal angles of the Si (111) and under different polarization combinations, the curve of the intensity of SHG with extern potential have different form of line or parabola. A quantitative analysis showed that this differences of the potential-dependence can be explained by the isotropic and anisotropic contribution of the Si (111) electrode. The change in isotropic and anisotropic contribution of the Si (111) electrode may be attributed to the increase in doping concentration of Si (111) electrodes.
Reactions of gas-phase species with small molecules are being actively studied to understand the elementary steps and mechanistic details of related condensed-phase processes. Activation of the very inert N≡N triple bond of dinitrogen molecule by isolated gas-phase species has attracted considerable interest in the past few decades. Apart from molecular adsorption and dissociative adsorption, interesting processes such as C–N coupling and degenerate ligand exchange were discovered. The present review article focuses on the recent progress on adsorption, activation, and functionalization of N2 by gas-phase species (particularly metal cluster ions) using mass spectrometry, infrared photo-dissociation spectroscopy, anion photoelectron spectroscopy, and quantum chemical calculations including density functional theory and high-level ab-initio calculations. Recent advances including characterization of adsorption products, dependence of clusters' reactivity on their sizes and structures, and mechanisms of N≡N weakening and splitting have been emphasized and prospects have been discussed.
Defect-mediated processes in two-dimensional transition metal dichalcogenides have a significant influence on their carrier dynamics and transport properties, however, the detailed mechanisms remain poorly understood. Here, we present a comprehensive ultrafast study on defect-mediated carrier dynamics in ion exchange prepared few-layer MoS<sub>2</sub> by femtosecond time-resolved Vis-NIR-MIR spectroscopy. The broadband photobleaching feature observed in the near-infrared transient spectrum discloses that the mid-gap defect states are widely distributed in few-layer MoS<sub>2</sub> nanosheets. The processes of fast trapping of carriers by defect states and the following nonradiative recombination of trapped carriers are clearly revealed, demonstrating the mid-gap defect states play a significant role in the photoinduced carrier dynamics. The positive to negative crossover of the signal observed in the mid-infrared transient spectrum further uncovers some occupied shallow defect states distributed at less than 0.24 eV below the conduction band minimum. These defect states can act as effective carrier trap centers to assist the nonradiative recombination of photo-induced carriers in few-layer MoS<sub>2</sub> on the picosecond time scale.
Silicon bulk etching is an important part of micro-electro-mechanical system (MEMS) technology. In this work, a novel etching method is proposed based on the vapor from TMAH solution heated up to boiling point. The monocrystalline silicon wafer is positioned over the solution surface and can be anisotropically etched by the produced vapor. This etching method does not rely on the expensive vacuum equipment used in dry etching. Meanwhile, it presents several advantages like low roughness, high etching rate and high uniformity compared with the conventional wet etching methods. The etching rate and roughness can reach 2.13 μm/min and 1.02 nm, respectively. To our knowledge, this rate is the highest record for the wet etching based on TMAH. Furthermore, the diaphragm structure and Al-based pattern on the non-etched side of wafer can maintain intact without any damage during the back-cavity fabrication. Finally, an etching mechanism has been proposed to illustrate the observed experimental phenomenon. It is suggested that there is a water thin film on the etched surface during the solution evaporation. It is in this water layer that the ionization and etching reaction of TMAH proceed, facilitating the desorption of hydrogen bubble and the enhancement of molecular exchange rate.
Methyl vinyl ketone oxide (MVCI), an unsaturated four-carbon Criegee intermediate produced from the ozonolysis of isoprene has been recognized to play a key role in determining the tropospheric OH concentration. It exists in four configurations (anti_anti, anti_syn, syn_anti and syn_syn) due to two different substituents of saturated methyl and unsaturated vinyl groups. In this study, we have carried out the electronic structure calculation at the multi-configurational CASSCF and multi-state MS-CASPT2 levels, as well as the trajectory surface-hopping (TSH) nonadiabatic dynamics simulation at the CASSCF level to reveal the different fates of syn/anti configurations in photochemical process. Our results show that the dominant channel for the S1-state decay is a ring closure, isomerization to dioxirane, during which, the syn(C-O) configurations with an intramolecular hydrogen bond show slower nonadiabatic photoisomerization. More importantly, it has been found for the first time in photochemistry of Criegee intermediate that the cooperation of two heavy groups (methyl and vinyl) leads to an evident pyramidalization of C3 atom in MVCI, which then results in two structurally-independent minimal-energy crossing points (CIs) towards the syn(C-O) and anti(C-O) sides, respectively. The preference of surface hopping for a certain CI is responsible for the different dynamics of each configuration.
Recent experiments report the rotation of FA (FA= HC[NH2]2+) cations significantly influence the excited-state lifetime of FAPbI3. However, the underlying mechanism remains unclear. Using ab initio nonadiabatic (NA) molecular dynamics combined with time-domain density functional simulations, we have demonstrated that eeorientation of partial FA cations significantly inhibits nonradiative electron-hole recombination with respect to the pristine FAPbI3 due to the decreased NA coupling by localizing electron and hole in different positions and the suppressed atomic motions. Slow nuclear motions simultaneously increase the decoherence time but which is overcomed by the reduced NA coupling, extending electron-hole recombination time scales to several nanoseconds and being about 3.9 times longer than that in pristine FAPbI3, which occurs within sub-nanosecond and agrees with experiment. Our study established the mechanism for the experimentally reported prolonged excited-state lifetime, providing rational strategy for design of high performance of perovskite solar cells and optoelectronic devices.
Understanding the influence of nanoparticles on the formation of protein amyloid fibrillation is crucial to extend their application in related biological diagnosis and nanomedicines. In this work, Raman spectroscopy was used to probe the amyloid fibrillation of hen egg-white lysozyme (HEWL) in the presence of silver nanoparticles (AgNPs) at different concentrations, combined with atomic force microscopy (AFM) and Thioflavin T (ThT) fluorescence assays. Four representative Raman indicators were utilized to monitor transformation of the protein tertiary and secondary structures at the molecular level: the Trp doublet bands at 1340 and 1360 cm-1, the disulfide stretching vibrational peak at 507 cm-1, the N-Cα-C stretching vibration at 933 cm-1, and the amide I band. All experimental results confirmed the concentration-dependent influence of AgNPs on the HEWL amyloid fibrillation kinetics. In the presence of AgNPs at low concentration (17 µg/ml), electrostatic interaction of the nanoparticles stabilizes disulfide bonds, and protect the Trp residues from exposure to hydrophilic environment, thus leading to formation of amorphous aggregates rather than fibrils. However, with the action of AgNPs at high concentration (1700 µg/ml), the native disulfide bonds of HEWL are broken to form Ag-S bonds owing to the competition of electrostatic interaction from a great deal of nanoparticles. According to providing functional surfaces for protein to interact with, AgNPs play a bridge role on direct transformation from α-helices to organized β-sheets. The present investigation sheds light on the controversial effects of AgNPs on the kinetics of HEWL amyloid fibrillation.
We report a measurement of electron momentum distributions of valence orbitals of cyclopentene employing symmetric noncoplanar (e, 2e) kinematics at impact energies of 1200 and 1600 eV plus the binding energy. Experimental momentum profiles for individual ionization bands are obtained and compared with theoretical calculations considering nuclear dynamics by harmonic analytical quantum mechanical and thermal sampling molecular dynamics approaches. The results demonstrate that molecular vibrational motions including ring-puckering of this flexible cyclic molecule have obvious influences on the electron momentum profiles for the outer valence orbitals, especially in the low momentum region. For π*-like molecular orbitals 3a'' and 2a''+3a' , the impact-energy dependence of the experimental momentum profiles indicates a distorted wave effect.
Though poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) has been widely adopted as hole transport material (HTM) in flexible perovskite solar cells (PSCs), arising from high optical transparency, good mechanical flexibility, and high thermal stability, its acidity and hygroscopicity would inevitably hamper the long-term stability of the PSCs and its energy level does not match well with that of perovskite materials which would lead to a relatively low open-circuit voltage (VOC). In this investigation, p-type delafossite CuCrO2 nanoparticles synthesized through hydrothermal method have been employed as an alternative HTM for triple cation perovskite [(FAPbI3)0.87(MAPbBr3)0.13]0.92[CsPbI3]0.08 (possessing better photovoltaic performance and stability than conventional CH3NH3PbI3) based inverted architecture PSCs. The average VOC of PSCs has increased from 908 mV of the devices with PEDOT:PSS HTM to 1020 mV of the devices with CuCrO2 HTM. Ultraviolet photoemission spectroscopy measurement demonstrates the energy band alignment between CuCrO2 and perovskite is better than that between PEDOT:PSS and perovskite, the electrochemical impedance spectroscopy indicates CuCrO2 based PSCs exhibit larger recombination resistance and longer charge carrier lifetime, which contribute to the high VOC of CuCrO2 HTM based PSCs.
Fast and accurate quantitative detection of $^{14}$CO$_2$ has important applications in many fields. The optical detection method based on the sensitive cavity ring-down spectroscopy technology has great potential. But currently it has difficulties of insufficient sensitivity and susceptibility to absorption of other isotopes/impurity molecules. We propose a stepped double-resonance spectroscopy method to excite $^{14}$CO$_2$ molecules to an intermediate vibrationally excited state, and use cavity ring-down spectroscopy to probe them. The two-photon process significantly improves the selectivity of detection. We derived the quantitative measurement capability of double-resonance absorption spectroscopy. The simulation results show that the double-resonance spectroscopy measurement is Doppler-free, thereby reducing the effect of other molecular absorption. It is expected that this method can achieve high-selectivity detection of $^{14}$CO$_2$ at the sub-ppt level.
A novel electrochemical non-enzymatic glucose sensor based on three-dimensional Au/MXene nanocomposites was developed. MXenes were prepared using the mild etched method, and the porous foam of Au nanoparticles was combined with the MXene by means of in situ synthesis. By controlling the mass of MXene in the synthesis process, porous foam with Au nanoparticles was obtained. The three-dimensional foam structure of nanoparticles was confirmed by scanning electron microscopy (SEM). Cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) were used to study the electrochemical performance of the Au/MXene nanocomposites. The Au/MXene nanocomposites acted as a fast redox probe for non-enzymatic glucose oxidation and showed good performance, including a high sensitivity of 22.45 μA mM−1 cm−1 and a wide linear range of 1–12 mM. Studies have shown that MXene as a catalyst-supported material is beneficial to enhance the conductivity of electrons and increase the loading rate of the catalyst materials. The foam structure with Au nanoparticles can provide a larger surface area, increase the contact area with the molecule in the catalytic reaction, and enhance the electrochemical reaction signal. In summary, this study showed that Au/MXene nanoparticles have the potential to be used in non-enzymatic glucose sensors.
N-乙基吡咯是吡咯分子的一个乙基取代衍生物,它的激发态衰变动力学目前为止很少被研究。在本文中,我们利用飞秒时间分辨光电子成像的实验方法研究了N-乙基吡咯分子S1态的衰变动力学。实验上采用了241.9和237.7 nm的泵浦激发波长。在241.9 nm激发下,得到了5.0±0.7 ps, 66.4±15.6 ps 和1.3±0.1 ns三个寿命常数。对于237.7 nm, 得到了2.1±0.1 ps 和13.1±1.2 ps两个寿命常数。我们将所有寿命常数都归属为S1态的振动态。并对不同S1振动态的弛豫机理进行了讨论。 N-ethylpyrrole is one of ethyl-substituted derivatives of pyrrole and its excited-state decay dynamics has never been explored. In this paper, we investigate ultrafast decay dynamics of N-ethylpyrrole excited to the S1 electronic state using a femtosecond time-resolved photoelectron imaging method. Two pump wavelengths of 241.9 and 237.7 nm are employed. At 241.9 nm, three time constants, 5.0±0.7 ps, 66.4±15.6 ps and 1.3±0.1 ns, were derived. For 237.7 nm, two time constants of 2.1±0.1 ps and 13.1±1.2 ps were derived. We assign all these time constants to be associated with different vibrational states in the S1 state. The possible decay mechanisms of different S1 vibrational states are briefly discussed.
The burgeoning two-dimensional (2D) layered materials provide a powerful strategy to realize efficient light-emitting devices. Among them, Gallium telluride (GaTe) nanoflakes, emerging strong photoluminescence (PL) emission from multilayer to bulk crystal, relax the stringent fabrication requirements of nanodevices. However, detailed knowledge on the optical properties of GaTe varied as layer thickness is still missing. Here we perform thickness-dependent PL and Raman spectra, as well as temperature-dependent PL spectra of GaTe nanoflakes. Spectral analysis reveals a spectroscopic signature for the coexistence of both the monoclinic and hexagonal phases in GaTe nanoflakes. To understand the experimental results, we propose a crystal structure where the hexagonal phase is on the top and bottom of nanoflakes while the monoclinic phase is in the middle of the nanoflakes. On the basis of temperature-dependent PL spectra, the optical gap of the hexagonal phase is determined to 1.849 eV, which can only survive under a temperature higher than 200 K with the increasing phonon population. Furthermore, the exciton-phonon interaction of the hexagonal phase is estimated to be 1.24 meV/K. Our results prove the coexistence of dual crystalline phases in multilayer GaTe nanoflakes, which may provoke further exploration of phase transformation in GaTe materials, as well as new applications in 2D light-emitting diodes and heterostructure-based optoelectronics.
2021, 34(2): i-ii.
[Abstract](14) [PDF 28KB](4)
2021, 34(2): iii-iv.
[Abstract](16) [PDF 504KB](7)
Solid oxide fuel cells (SOFCs) are regarded to be a key clean energy system to convert chemical energy (e.g. H2 and O2) into electrical energy with high efficiency, low carbon footprint, and fuel flexibility. The electrolyte, typically doped zirconia, is the "state of the heart" of the fuel cell technologies, determining the performance and the operating temperature of the overall cells. Yttria stabilized zirconia (YSZ) have been widely used in SOFC due to its excellent oxide ion conductivity at high temperature. The composition and temperature dependence of the conductivity has been hotly studied in experiment and, more recently, by theoretical simulations. The characterization of the atomic structure for the mixed oxide system with different compositions is the key for elucidating the conductivity behavior, which, however, is of great challenge to both experiment and theory. This review presents recent theoretical progress on the structure and conductivity of YSZ electrolyte. We compare different theoretical methods and their results, outlining the merits and deficiencies of the methods. We highlight the recent results achieved by using stochastic surface walking global optimization with global neural network potential (SSW-NN) method, which appear to agree with available experimental data. The advent of machine-learning atomic simulation provides an affordable, efficient and accurate way to understand the complex material phenomena as encountered in solid electrolyte. The future research directions for design better electrolytes are also discussed.
Assembling of a few particles into a cluster commonly occurs in many systems. However, it is still challenging to precisely control particle assembling, due to the various amorphous structures induced by thermal fluctuations during cluster formation. Although these structures may have very different degrees of aggregation, a quantitative method is lacking to describe them, and how these structures evolve remains unclear. Therefore a significant step towards precise control of particle self-assembly is to describe and analyze various aggregation structures during cluster formation quantitatively. In this work, we are motivated to propose a method to directly count and quantitatively compare different aggregated structures. We also present several case studies to evaluate how the aggregated structures during cluster formation are affected by external controlling factors, e.g., different interaction ranges, interaction strengths, or anisotropy of attraction.
The ring-polymer molecular dynamics (RPMD) was used to calculate the thermal rate coefficients of the multi-channel roaming reaction H+MgH→Mg+H2. Two reaction channels, tight and roaming, are explicitly considered. This is a pioneering attempt of exerting RPMD method to multi-channel reactions. With the help of a newly developed optimization-interpolation protocol for preparing the initial structures and adaptive protocol for choosing the force constants, we have successfully obtained the thermal rate coefficients. The results are consistent with those from other theoretical methods, such as variational transition state theory and quantum dynamics. Especially, RPMD results exhibit negative temperature dependence, which is similar to the results from variational transition state theory but different from the ones from ground state quantum dynamics calculations.
Diffusion of tracer particles in active bath has attracted extensive attention in recent years. So far, most studies have considered isotropic spherical tracer particles, while the diffusion of anisotropic particles has rarely been involved. Here we investigate the diffusion dynamics of a rigid rod tracer in a bath of active particles by using Langevin dynamics simulations in a two-dimensional space. Particular attention is paid to how the translation (rotation) diffusion coefficient $ D_{ \rm{T}} $ ($ D_{ \rm{R}} $) change with the length of rod $ L $ and active strength $ F_{ \rm{a}} $. In all cases, we find that rod exhibits superdiffusion behavior in a short time scale and returns to normal diffusion in the long time limit. Both $ D_{ \rm{T}} $ and $ D_{ \rm{R}} $ increase with $ F_{ \rm{a}} $, but interestingly, a nonmonotonic dependence of $ D_{ \rm{T}} $ ($ D_{ \rm{R}} $) on the rod length has been observed. We have also studied the translation-rotation coupling of rod, and interestingly, a negative translation-rotation coupling is observed, indicating that rod diffuses more slowly in the parallel direction compared to that in the perpendicular direction, a counterintuitive phenomenon that would not exist in an equilibrium counterpart system. Moreover, this anomalous (diffusion) behavior is reentrant with the increase of $ F_{ \rm{a}} $, suggesting two competitive roles played by the active feature of bath particles.
In order to study the effect of different modification methods on polysilsesquioxane (POSS) modified cellulose, a molecular dynamics method was used to establish a pure cellulose model and a series of modified models modified by polysilsesquioxane in different ways. And their thermodynamic properties were calculated. The results showed that the performance of cellulose models was better than that of unmodified model, and the modified effect was the best when two cellulose chains were grafted onto polysilsesquioxane by chemical bond (M2 model). Compared with pure cellulose model, the cohesive energy density and solubility parameters of M2 model are increased by 9%, and the values of tensile modulus, bulk modulus, shear modulus and Cauchy pressure increased by 38.6%, 29.5%, 41.1% and 29.5%, respectively. In addition, the free volume fraction and mean square displacement of each model were calculated and analyzed in this work. Compared with the pure cellulose model, the molecular chain entanglement of cellulose was increased due to the existence of the chemical bonds in the M2 model, which made the cellulose molecular chains occupy more free volume, so that the system had a smaller free volume fraction, inhibited the chain movement of cellulose chains, and thus improved the thermal stability of cellulose.
The lattice parameters, bulk modulus, first derivative of the bulk modulus, electronic band structures, phonon dispersion curves and phonon density of states calculations for Li2AlGa and Li2AlIn Heusler alloys are performed and compared in this study using density functional theory within the generalized gradient approximation. Computed lattice parameters display a good agreement with the literature. Obtained electronic band structures of both Heusler alloys show that they are in semi-metallic structure. Phonon dispersion curves and the phonon density of states graphs are also obtained in order to study the lattice dynamics of these Heusler alloys. It is noticed that Li2AlGa and Li2AlIn Heusler alloys are dynamically stable in the ground state.
The kinetics for hydrogen (H) adsorption on Ir(111) electrode has been studied in both HClO$ _4 $ and H$ _2 $SO$ _4 $ solutions by impedance spectroscopy. In HClO$ _4 $, the adsorption rate for H adsorption on Ir(111) increases from 1.74$ \times $10$ ^{-8} $ mol$ \cdot $cm$ ^{-2} $$ \cdot $s$ ^{-1} $ to 3.47$ \times $10$ ^{-7} $ mol$ \cdot $cm$ ^{-2} $$ \cdot $s$ ^{-1} $ with the decrease of the applied potential from 0.2 V to 0.1 V (vs. RHE), which is ca. one to two orders of magnitude slower than that on Pt(111) under otherwise identical condition. This is explained by the stronger binding of water to Ir(111), which needs a higher barrier to reorient during the under potential deposition of H from hydronium within the hydrogen bonded water network. In H$ _2 $SO$ _4 $, the adsorption potential is ca. 200 mV negatively shifted, accompanied by a decrease of adsorption rate by up to one order of magnitude, which is explained by the hindrance of the strongly adsorbed sulfate/bisulfate on Ir(111). Our results demonstrate that under electrochemical environment, H adsorption is strongly affected by the accompanying displacement and reorientation of water molecules that initially stay close to the electrode surface.
The photophysical and photochemical behaviors of thioxanthen-9-one (TX) in different solvents have been studied using nanosecond transient absorption spectroscopy. A unique absorption of the triplet state $ ^3 $TX$ ^* $ is observed, which involves two components, $ ^3 $n$ \pi $$ ^* $ and $ ^3 $$ \pi\pi^* $ states. The $ ^3 $$ \pi\pi^* $ component contributes more to the $ ^3 $TX$ ^* $ when increasing the solvent polarity. The self-quenching rate constant $ k_{ \rm{sq}} $ of $ ^3 $TX$ ^* $ is decreased in the order of CH$ _3 $CN, CH$ _3 $CN/CH$ _3 $OH (1:1), and CH$ _3 $CN/H$ _2 $O (1:1), which might be caused by the exciplex formed from hydrogen bond interaction. In the presence of diphenylamine (DPA), the quenching of $ ^3 $TX$ ^* $ happens efficiently via electron transfer, producing the TX$ ^\cdot $$ ^- $ anion and DPA$ ^{\cdot} $$ ^+ $ cation radicals. Because of insignificant solvent effects on the electron transfer, the electron affinity of the $ ^3 $n$ \pi $$ ^* $ state is proved to be approximately equal to that of the $ ^3 $$ \pi\pi^* $ state. However, a solvent dependence is found in the dynamic decay of TX$^{{ \cdot ^ - }}$ anion radical. In the strongly acid aqueous acetonitrile (pH = 3.0), a dynamic equilibrium between protonated and unprotonated TX is definitely observed. Once photolysis, $ ^3 $TXH$ ^{+*} $ is produced, which contributes to the new band at 520 nm.
Au@Au@Ag double shell nanoparticles were fabricated and characterized using TEM, STEM-mapping and UV-Vis methods. Using crystal violet as Raman probe, the surface-enhanced Raman scattering (SERS) activity of the as-prepared Au@Au@Ag nanoparticles was studied by comparing to Au, Au@Ag and Au@Au core-shell nanoparticles which were prepared by the same methods. Moreover, it can be found that the SERS activity was enhanced obviously by introduction of NaCl and the concentrations of NaCl played a key role in SERS detection. With an appropriate concentration of NaCl, the limit of detection as low as 10$ ^{-10} $ mol/L crystal violet can be achieved. The possible enhanced mechanism was also discussed. Furthermore, with simple sample pretreatment, the detection limit of 5 μg/g Rhodamine B (RhB) in chili powders can be achieved. The results highlight the potential utility of Au@Au@Ag for detection of illegal food additives with low concentrations.
The issues of low crystallinity and slow crystallization rate of poly(lactic acid) (PLA) have been widely addressed. In this work, we find that doping PLA with Zn(Ⅱ) ions can speed up the process of crystallization of PLA. Three kinds of Zn(Ⅱ) salts (ZnCl$ _2 $, ZnSt and ZnOAc) were tested in comparison with some other ions such as Mg(Ⅱ) and Ca(Ⅱ). The increased crystallinity and crystallization rate of PLA doping with Zn(Ⅱ) are reflected in FT-IR and variable temperature Raman spectroscopy. The crystallinity is further confirmed or measured with differential scanning calorimetry and X-ray diffraction. The crystallinity rate of the PLA/ZnSt-0.4 wt% material can reach 22.46% and the crystallinity rate of the PLA/ZnOAc-0.4 wt% material can reach 24.83%, as measured with differential scanning calorimetry.
g-C3N4 coupled with high specific area TiO2 (HSA-TiO2) composite was prepared by a simple solvothermal method, which was easy to operate with low energy consumption. Degradation of methyl orange test results showed that HSA-TiO2 effectively improved the photocatalytic activity effectively. Photoelectrochemical test results indicated that the separation of photo-generated carriers and the charge carrier migration speed of TiO2 were improved after combination with g-C3N4. g-C3N4/HSA-TiO2 showed strong photocatalytic ability. The degree of degradation of methyl orange by 6%-g-C3N4/HSA-TiO2 could reach up to 92.44%. Furthermore, it revealed good cycle performance. The photocatalytic mechanism of g-C3N4/HSA-TiO2 was proposed.
Four organic small-molecule hole transport materials ( D41 , D42 , D43 and D44 ) of tetraarylpyrrolo[3, 2-b]pyrroles were prepared. They can be used without doping in the manufacture of the inverted planar perovskite solar cells. Tetraarylpyrrolo[3, 2-b]pyrroles are accessible for one-pot synthesis. D42 , D43 and D44 possess acceptor-$ \pi $-donor-$ \pi $-acceptor structure, on which the aryl bearing substitutes of cyan, fluorine and trifluoromethyl, respectively. Instead, the aryl moiety of D41 is in presence of methyl with a donor-$ \pi $-donor-$ \pi $-donor structure. The different substitutes significantly affected their molecular surface charge distribution and thin-film morphology, attributing to the electron-rich properties of fused pyrrole ring. The size of perovskite crystalline growth particles is affected by different molecular structures, and the electron-withdrawing cyan group of D42 is most conducive to the formation of large perovskite grains. The D42 fabricated devices with power conversion efficiency of 17.3% and retained 55% of the initial photoelectric conversion efficiency after 22 days in dark condition. The pyrrolo[3, 2-b]pyrrole is efficient electron-donating moiety for hole transporting materials to form good substrate in producing perovskite thin film.
To address the limitations of the separate fluoride removal or detection in the existing materials, herein, amino-decorated metal organic frameworks NH$ _2 $-MIL-53(Al) have been succinctly fabricated by a sol-hydrothermal method for simultaneous removal and determination of fluoride. As a consequence, the proposed NH$ _2 $-MIL-53(Al) features high uptake capacity (202.5 mg/g) as well as fast adsorption rate, being capable of treating 5 ppm of fluoride solution to below the permitted threshold in drinking water within 15 min. Specifically, the specific binding between fluoride and NH$ _2 $-MIL-53(Al) results in the release of fluorescent ligand NH$ _2 $-BDC, conducive to the determination of fluoride via a concentration-dependent fluorescence enhancement effect. As expected, the resulting NH$ _2 $-MIL-53(Al) sensor exhibits selective and sensitive detection (with the detection limit of 0.31 $ \mu $mol/L) toward fluoride accompanied with a wide response interval (0.5-100 $ \mu $mol/L). More importantly, the developed sensor can be utilized for fluoride detection in practical water systems with satisfying recoveries from 89.6% to 116.1%, confirming its feasibility in monitoring the practical fluoride-contaminated waters.
Residues of tetracycline antibiotics (TCs) in environments may be harmful to human. Due to their high polarities, it is extremely challenging to efficiently enrich TCs with low concentrations in natural waters for analysis. In this work, a magnetic metal-organic framework Fe$ _3 $O$ _4 $@[Cu$ _3 $(btc)$ _2 $] was synthesized and applied as a dispersive micro-solid phase extraction adsorbent for TCs enrichment. Effects of dispersive micro-solid phase extraction conditions including extraction time, solution pH, and elution solvent on the extraction efficiencies of TCs were investigated. Results show that TCs could be enriched efficiently by Fe$ _3 $O$ _4 $@[Cu$ _3 $(btc)$ _2 $], and electrostatic interaction between TCs and Fe$ _3 $O$ _4 $@[Cu$ _3 $(btc)$ _2 $] dominated this process. Combined with liquid chromatography-tandem mass spectrometry, four TCs residues (oxytetracycline, tetracycline, chlortetracycline, and doxycycline) in natural waters were determined. The detection limits (LOD, $ S/N $ = 3) of the four antibiotics were 0.01-0.02 $ \mu $g/L, and the limits of quantitation (LOQ, $ S/N = $10) were 0.04-0.07 $ \mu $g/L. The recoveries obtained from river water and aquaculture water spiked with three TCs concentration levels ranged from 70.3% to 96.5% with relative standard deviations of 3.8%-12.8%. Results indicate that the magnetic metal-organic framework based dispersive micro-solid phase extraction is simple, rapid and high-loading for antibiotics enrichment from water, which further expand the practical application of metal-organic frameworks in sample pretreatment for environmental pollutant analysis. |
e367b5bb5fc622f3 | Informatics & Technology
How did the data propagate? Automated Optical Path Monitoring
With the development of 5G, our world might seem more wireless than ever. However, lurking behind this and facilitating all high-speed data transfer are kilometres and kilometres of optical fibres. The backbone network for all communication, wireless or otherwise, is this sprawling network of fibres. For reliable communications and internet access, the world’s expanse of fibre optic cables must work well with faults being diagnosed quickly and easily. Takeo Sasai at NTT Laboratories has been finding novel ways to employ artificial intelligence for easy-to-use fibre monitoring and diagnosis.
Every second, 24,000 Gigabytes of data is uploaded to the internet. This is equivalent to nearly 50 completely full standard laptop hard drives. A significant portion of this data comes from social media sites such as Facebook and Twitter, and the technological effort to keep these volumes of data moving between data centres and end users is significant.
While many end users now make use of wireless technologies such as 4G to connect to the internet and stream information, most of the underlying infrastructure is made of a network of interconnected fibre optic cables. These cables are everywhere, with large bundles buried under the ocean floor and others running overhead in our cities.
AI learns propagation equation in an optical fibre, and enables automated testing of optical networks.
The advantages of wired communication are that large amounts of data can be transferred faster and are inherently more secure than wireless communications. The transmission capabilities of optical fibres continue to improve, but developing, maintaining, and expanding this network of cables is a costly task. Many optical fibres are in difficult-to-access and remote locations, and when they are installed, their performance must be checked carefully.
Takeo Sasai at NTT Laboratories is an expert on modelling the inner secrets of what happens in the great lengths of fibre optic cables that cover our world. He has been working on ways to automatise the diagnosis of problems with fibre optic cables, such as information and data loss, with a bit of help from artificial intelligence.
Information as light
Fibre optic cables carry information in the form of light. The information to be sent is encoded as little pulses of light at one end of the cable and can be decoded back to electrical signals at the other.
There are different ways of constructing fibre optic cables that are optimised for either short or distance data transfer. Still, there are two properties of the cable that are crucial in both cases – the loss and the dispersion. Loss can occur for several reasons in a cable – from attenuation of the strength of the signal due to misalignment or damage or even breakages in the fibre. Dispersion is a ‘spreading’ of the signal over time, which can happen due to different colours of the light signal travelling at different speeds, or different modes propagating in the fibre at different rates.
Physical damage and misalignment of fibres can be easily done, and rapid diagnosis is essential.
While a small amount of loss can be tolerated, if the loss rate becomes too high the fibre will no longer reliably transfer information. When fibres are installed – a complex and expensive process in itself – the loss and dispersion characteristics are usually tested to make sure signals can pass through the fibres without any problems. The challenge is that this testing process requires expert engineers and specialist equipment, such as dispersion analysers and an optical time domain reflectometer (OTDR). The test equipment injects a series of sample light pulses into the fibre to mimic a signal and measures how long it takes for them to return and their properties.
Sasai believes there is an easier way to check the quality of fibres that would also allow for constant monitoring of fibre conditions. At present, if there is a suspected problem with an optical fibre, engineers will have to travel on-site with test equipment like an OTDR to carry out measurements for diagnosis. This is time consuming and expensive but potentially a thing of the past with Sasai’s automated methods.
Data insights
As optical fibres are constantly transmitting information, Sasai’s new approach makes use of this data as a diagnostic tool in itself. Using a machine learning algorithm, Sasai can identify particular patterns in the received datasets that reflect the loss and dispersion the signal has undergone on its journey through the fibre.
The reason this analysis method works is because we have an equation that can be used to understand and describe how light propagates through an optical fibre, known as the nonlinear Schrödinger equation (NLSE). The mathematical structure of the NLSE is quite similar to a neural network – a computing system that is a series of algorithms, or neurons, connected in different ways by a ‘net’ to represent the relationships between them.
Fig 1: NLSE has essentially the same structure as neural networks, consisting of linear and nonlinear functions.
Neutral networks are supposed to imitate how our brains work and are widely used in machine learning as they can capture even very complex relationships between the ‘neurons’ or processing points. Often, these patterns and relationships would be very hard for a human to spot, and the data processing and analysis for a neural network can also be automated. In Sasai’s case, he has created a network of ‘learning NLSE’, taking advantage of the similar structures between the equations and a neural network to use received data to calculate the signal loss and dispersion without the need for any additional measuring equipment like OTDR.
The reason to use neural networks for NLSE is that they cannot usually be solved analytically, but require numerical methods that undergo many iterations to try to converge on an approximate solution. One of the numerical methods used for the NLSE is known as digital backpropagation, which involves a complex series of operations, including concatenation of a linear (dispersion) operation block and a nonlinear (phase rotation) operation block (see figure 1). The phase rotation is one of the main limitations on the performance of optical communications, but this can be compensated through careful treatment with the digital backpropagation.
Fig 2: The estimated loss profile of 70 km x 4 spans systems, obtained using Sasai’s method. The dashed line is the loss profile measured using a traditional optical time-domain reflectometer (OTDR) for reference. The estimated profile fits the reference OTDR line well, and clearly exhibits the fibre loss, intentional attenuation, and even amplification by optical amplifers. It is noteworthy that the loss over multi-span link can be estimated, which is impossible for standard OTDRs. For dispersion profile, see the original paper shown in References.
The similarity between the neutral network and the NLSE means that all the coefficients that are required for the best digital backpropagation steps can be ‘learned’ from transmitted and received data. From the learned values of these coefficients, it is possible to tell the optical power and dispersion at each and every point of the fibre.
As the process is fully automated, and make sure of data that would be transmitted anyway, the automated diagnosis could be set up as a continual monitoring approach and does not need any extra measurements to be made with additional equipment. Sasai is excited by the possibility of monitoring of fibre optic cable status, as this will allow prediction of failures before they occur and advance warnings of problems to ensure even better network stability.
This will allow easier network building and monitoring to ensure even better network stability.
Prediction and modelling
No additional equipment like OTDR is needed for Sasai’s diagnostics as the received dataset will indirectly contain information on the fibre loss and dispersion. Using the NLSE, given an input value and set of fibre conditions, it is possible to predict what the output signal should look like. In the case where the output is known, it is possible to use these relationships to try and calculate what the fibre conditions must have been to generate such a signal.
Combining the knowledge of the optical propagation in the fibre from the NSLEs with the learning capabilities of the neural net, it is then possible to work out information such as which region of the fibre is experiencing high loss and pinpoint perhaps where structural works need to be carried out and where the fibres have become problematic.
Sasai’s approach opens up many new possibilities for maintaining optical fibre networks, which will become increasingly important in an ever-more connected world with a push for faster and faster internet connections. Even the rollout of new wireless transmission technologies such as 5G will require an expansion of fibre optic cabling to connect masts and other equipment.
While fibre optic cable bundles are surprisingly strong, physical damage and misalignment of fibres can be easily done, particularly when there are nearby construction works. Having full online monitoring of cables would make rapid diagnosis of problems a reality, without the need to ever call out, or potentially even consult, an engineer.
Personal Response
Do you think this type of fault monitoring will become commonplace in fibre optic installations in the future?
Yes, we are currently working on the demonstration of this monitoring technique using actually installed fibre networks. We believe that optical fibre connections will be like current LAN cables, which we can use to establish connections just by inserting them into our laptops.
Want to read more articles like this?
Sign Up!
Leave a Reply
Would you like to learn more about our services?
Subscribe to our FREE PUBLICATION |
8b1401da2f0b68f6 | Affine symmetry in mechanics of collective and internal modes. Part II. Quantum models
J. J. Sławianowski, V. Kovalchuk, A. Sławianowska,
B. Gołubowska, A. Martens, E. E. Rożko, Z. J. Zawistowski
Institute of Fundamental Technological Research,
Polish Academy of Sciences,
21 Świȩtokrzyska str., 00-049 Warsaw, Poland
e-mails: , ,
, ,
, ,
Discussed is the quantized version of the classical description of collective and internal affine modes as developed in Part I. We perform the Schrödinger quantization and reduce effectively the quantized problem from to degrees of freedom. Some possible applications in nuclear physics and other quantum many-body problems are suggested. Discussed is also the possibility of half-integer angular momentum in composed systems of spin-less particles.
Keywords: collective modes, affine invariance, Schrödinger quantization, quantum many-body problem.
A fascinating feature of our models of affine collective dynamics is their extremely wide range of applications. It covers the nuclear and molecular dynamics, micromechanics of structured continua, perhaps nanostructure and defects phenomena, macroscopic elasticity and astrophysical phenomena like vibration of stars and clouds of cosmic dust. Obviously, microphysical applications must be based on the quantized version of the theory. And one is dealing then with a very curious convolution of quantum theory with mathematical methods of continuum mechanics. It is worth to mention that there were even attempts, mainly by Barut and Ra̧czka [4], to describe the dynamics of strongly interacting elementary particles (hadrons) in terms of some peculiar, quantized continua. By the way, as French say, the extremes teach one another; it is not excluded that the dynamics of cosmic objects like neutron stars must be also described in quantum terms. They are though giant nuclei, very exotic ones, because composed exclusively of neutrons (enormous ”mass numbers” and vanishing ”atomic numbers”).
1 Quantization of classical geodetic systems
As usual, before quantizing the classical model, one has to perform some preliminary work on the level of its classical Hamiltonian dynamics [11, 16, 17, 18, 19].
Let us consider a classical geodetic system in a Riemannian manifold , where denotes the configuration space, and is the ”metric” tensor field on underlying the kinetic energy form. In terms of generalized coordinates or in Hamiltonian terms we have, respectively,
where, obviously, , .
As usual, the metric tensor gives rise to the natural measure on ,
where denotes the number of degrees of freedom, i.e., . For simplicity the square-root expression will be always denoted by . The mathematical framework of Schrödinger quantization is based on L, i.e., the Hilbert space of complex-valued wave functions on square-integrable in the -sense. Their scalar product is given by the usual formula:
The classical kinetic energy expression is replaced by the operator , where denotes the (”crossed”) Planck constant, and is the Laplace-Beltrami operator corresponding to , i.e.,
In the last expression denotes the Levi-Civita covariant differentiation in the -sense. Therefore, the kinetic energy operator is formally obtained from the corresponding classical expression (kinetic Hamiltonian) by the substitution .
If the problem is non-geodetic and some potential is admitted, the corresponding Hamilton (energy) operator is given by , where the operator acts on wave functions simply multiplying them by , i.e., . This is the reason why very often one does not distinguish graphically between and .
2 Problems concerning quantization
There are, obviously, many delicate problems concerning quantization which cannot be discussed here and, fortunately, do not interfere directly with the main subjects of our analysis. Nevertheless, we mention briefly some of them. Strictly speaking, wave functions are not scalars but complex densities of the weight so that the bilinear expression is a real scalar density of weight one, thus, a proper object for describing probability distributions [10]. But in all realistic models, and the our one is not an exception, the configuration space is endowed with some Riemannian structure. And this enables one to factorize scalar (and tensor) densities into products of scalars (tensors) and some standard densities built of the metric tensor. Therefore, the wave function may be finally identified with the complex scalar field (multicomponent one when there are internal degrees of freedom).
There are also some arguments for modifying by some scalar term proportional to the curvature scalar. Of course, such a term may be always formally interpreted as some correction potential. And besides, we usually deal with Riemannian manifolds of the constant Riemannian curvature, and then such additional terms result merely in the over-all shifting of energy levels.
In Riemann manifolds the Levi-Civita affine connection preserves the scalar product; because of this, the operator is formally anti-self-adjoint and , are formally self-adjoint. They are, however, differential operators, thus, the difficult problem of self-adjoint extensions appears. And besides, being differential operators, they are unbounded in the usual sense, thus, their spectral analysis also becomes a difficult and delicate subject. All such problems will be neglected and considered in the zeroth-order approximation of the mathematical rigor, just as it is usually done in practical physical applications. This is also justified by the fact that, as a rule, our first-order differential operators generate some well-definite global transformation groups admitting a lucid geometrical interpretation. It is typical that in such situation all subtle problems on the level of functional analysis, like the common domains, etc., may be successfully solved.
Therefore, from now on we will proceed in a ”physical” way and all terms like ”self-adjoint”, ”Hermitian”, etc. will be used in a rough way characteristic for physical papers and applied mathematics.
We shall deal almost exclusively with stationary problems when the Hamilton operator is time-independent, thus, the Schrödinger equation
will be replaced by its stationary form, i.e., by the eigenequation , where, obviously,
and is a time-independent wave function on the configuration space.
3 Multi-valuedness of wave functions
There is another delicate point concerning fundamental aspects of quantization which, however, may be of some importance and will be analyzed later on. Namely, it is claimed in all textbooks in quantum mechanics that wave functions solving reasonable Schrödinger equations must satisfy strong regularity conditions, and first of all they must be well-defined one-valued functions all over the configuration space, in addition, continuous together with their derivatives. This demand is mathematically essential in the theory of Sturm-Liouville equations and besides it has to do with quantization or, more precisely, discrete spectra of certain physical quantities. By the way, these two things are not independent.
There are, however, certain arguments that some physical systems may admit multi-valued wave functions. It is so when the configuration space is not simply connected and its fundamental group is finite. Physically it is only the squared modulus that is to be one-valued because, according to the Born statistical interpretation, it represents the probability distribution of detecting a system in various regions of the configuration space. But for the wave function itself it is sufficient to be ”locally” one-valued and sufficiently smooth, i.e., to be defined on the universal covering manifold of the configuration space . This may lead to a consistent quantum mechanics, perhaps with some kind of superselection rules. It is so in quantum mechanics of rigid body, which is sometimes expected to be a good model of the elementary particles spin [1, 2, 3]. The configuration space of the rigid body without translational motion may be identified with the proper rotation group SO (SO in dimensions), obviously, when some reference orientation and Cartesian coordinates are fixed. But it is well-known that SO is doubly-connected (and so is SO for any ). Its covering group is SU (Spin for any ). Therefore, it is really an instructive exercise, and perhaps also a promising physical hypothesis, to develop the rigid top theory with SU as configuration space [1, 2, 3]. In affinely-rigid body mechanics we are dealing with a similar situation, namely, GL and SL (more generally, GL and SL for ) are doubly-connected. This topological property is simply inherited from the corresponding one for SO (SO) on the basis of the polar decomposition [4, 24, 25]. Therefore, the standard quantization procedure in a manifold should be modified by using wave amplitudes defined on the covering manifolds , . By the way, some difficulty and mathematical curiosity appears then because these covering groups are non-linear (do not admit faithful realizations in terms of finite-dimensional matrices). This fact, known long ago to E. Cartan, was not known to physicists; a rather long time and enormous work has been lost because of this.
4 Classical background for quantization
Before going into such details we must go back to certain classical structures underlying quantization procedure. They were touched earlier in sections 2 and 3 of Part I [20] but in a rather superficial way, and besides, we concentrated there on the collective modes ruled by the linear and affine groups. This is really the main objective of our study, nevertheless, not exceptional one; it is also clear that, injecting the subject into a wider context, one attains a deeper understanding, free of accidental details.
In section 2 of Part I [20] Lie-algebraic objects were introduced. It is an important fact from the Lie group theory that they give rise to some vector fields , on invariant, respectively, under right and left translations on . Namely, for any fixed , they are given by , .
Affine velocities introduced in section 3 of Part I [20] are just the special case of Lie-algebraic objects. In the same section the dual objects , , i.e., affine spin in two representations, were introduced. These dual quantities exist also in the general case when is an arbitrary Lie group. They are then elements of the dual space, i.e., Lie co-algebra, . Their relationship with canonical momenta and configurations is given by the following formula involving evaluations of co-vectors on vectors: , where , , and , are arbitrary. Denoting the adjoint transformation of Ad by the usual symbol Ad, we have that , the obvious generalization of the corresponding relationship between laboratory and co-moving representation of affine (or usual metrical) spin. And just as in this special case, the quantities , are Hamiltonian generators of the groups of left and right regular translations , on .
In applications we are usually dealing with some special Lie groups for which many important formulas and relationships may be written in a technically simple form avoiding the general abstract terms.
As mentioned, throughout this series of articles we are dealing almost exclusively with linear groups GL L, where is a linear space, e.g., some or .
All the mentioned simplifications follow from the obvious canonical isomorphism between L and its dual L, based on the pairing . The Lie algebra is a linear subspace of L, therefore, its dual space may be canonically identified with the quotient space L, where An denotes the subspace of linear functions vanishing on . But, according to the above identification between L and L itself, An may be identified with some linear subspace of L; we shall denote it by . Therefore, the Lie co-algebra is canonically isomorphic with the corresponding quotient, i.e., . This is the general fact for linear groups and their Lie algebras. However, in some special cases, just ones of physical relevance, this quotient space admits a natural canonical isomorphism onto some distinguished linear subspace of L consisting of natural representants of cosets, e.g., in the most practical cases is canonically isomorphic with itself. For example, it is so for SO, SL, where the Lie algebras SO, SL may be identified with the duals SO, SL. By the way, for certain reasons it is more convenient to use the pairing for the orthogonal group SO.
Just as in the special case of affine objects, transformation rules for , are analogous to those for , ; we mean transformations under regular translations:
Using the identifications mentioned above (assuming that they work), we can write these rules in a form analogous to that for non-holonomic velocities,
i.e., just as it is for the affine spin.
Geometrical meaning of and is that of the momentum mappings induced, respectively, by the group of left and right regular translations. And the relationship between two versions of -objects is as follows: . The objects and may be also interpreted in terms of right- and left-invariant differential forms (co-vector fields), i.e., Maurer-Cartan forms , on the group . Assuming the afore-mentioned identification, we can express , for any fixed , in the following forms: , .
Just as in the special case of affine systems, Poisson bracket relations of - and -components are given by structure constants of . Those for have opposite signs to those for , and the mutual ones vanish (left regular translations commute with the right ones).
5 Hamiltonian systems on Lie group spaces
Geodetic Hamiltonian systems on Lie group spaces were studied by various research groups; let us mention, e.g., the prominent mathematicians like Hermann, Arnold, Mishchenko, Fomenko, and others. Obviously, the special stress was laid on models with kinetic energies (Riemann structures on ) invariant under left or right regular translations. As expected, models invariant simultaneously under left and right translations have some special properties and due to their high symmetries are computationally simplest.
From now on we assume that our configuration space is a Lie group or, more precisely, its homogeneous space with trivial isotropy groups. Also in a more general situation when isotropy groups are nontrivial (even continuous) a large amount of analysis performed on group spaces remains useful.
Obviously, just as in the special case of affinely-rigid bodies, left- and right-invariant kinetic energies are, respectively, quadratic forms of and with constant coefficients. Their underlying Riemannian structures on are locally flat if and only if is Abelian.
In both theoretical and practical problems the Hamilton language based on Poisson brackets is much more lucid and efficient than that based on Lagrange equations. If besides of geodetic inertia the system is influenced only by potential forces derivable from some potential energy term , then, obviously, the classical Hamiltonian is given by the following expression:
It is very convenient to express the Hamiltonian and all other essential quantities in terms of non-holonomic velocities and their conjugate non-holonomic (Poisson-non-commuting) momenta.
Let be some basis in the Lie algebra and be the corresponding canonical coordinates of the first kind on , i.e., . Lie-algebraic objects will be, respectively, expanded as follows: , . Using the expansion coefficients , one obtains the following simple expressions for the left- and right-invariant kinetic energies:
where the matrices , are constant, symmetric, and non-singular. The positive definiteness problem is a more delicate matter, and there are some hyperbolic-signature structures of some relevance both for physics and pure geometry.
For potential systems Legendre transformation may be easily described with the use of non-holonomic objects, respectively,
where, obviously, , are expansion coefficients of , with respect to the dual basis of the Lie co-algebra, i.e., , . The resulting Hamiltonians have, respectively, the following forms:
where, obviously, the matrices , are reciprocal to , .
If structure constants of with respect to the basis are defined according to the convention , then the Poisson brackets of -objects are given as follows:
6 Basic differential operators
Let us define basic differential operators generating left and right regular translations on . We denote them respectively by and . Their action on complex- or vector-valued functions on is defined as follows:
Their Lie-bracket (commutator) relations differ from the above Poisson rules for -quantities by signs:
Poisson brackets between -objects and functions depending only on coordinates (pull-backs of functions defined on the configuration space ) are given by
The system of Poisson brackets quoted above is sufficient for calculating any other Poisson bracket with the help of well-known properties of this operation. Thus, e.g., for any pair of functions , depending in general on all phase-space variables we have the following expression:
and, when the phase space is parameterized in terms of quantities , , we have the similar expression:
Obviously, the finite regular translations may be expressed in terms of the following exponential formulas:
with all known provisos concerning exponentiation of differential operators.
Non-holonomic velocities , depend linearly on generalized velocities , i.e., , . Similarly, and depend contragradiently on the conjugate momenta , i.e., , , where, obviously, , . This leads to the following expressions for generators:
Many of the above statements remain true for the general non-holonomic velocities and their conjugate momenta without group-theoretical background [5]. Nevertheless, there are also important facts depending on the group structure and on the properties of , respectively as the basic right- and left-invariant co-vector fields (Maurer-Cartan forms). This concerns mainly invariant volumes, scalar products, Hermiticity of basic operators, and structure of the Laplace-Beltrami operator.
In group manifolds we are usually interested in left- or right-invariant kinetic energies. Even in the special case of the double invariance the definition-based direct calculation of the corresponding Laplace-Beltrami operator and the volume element may be rather complicated. However, if the corresponding kinetic metrics is left- or right-invariant, then so is the resulting volume element. Therefore, the L-structure on may be directly based on the integration with respect to the Haar measure. As known from the theory of locally compact groups, this measure is unique up to the constant normalization factor. In the special case of compact groups this normalization may be fixed by the natural demand that the total (finite in this case) volume equals to unity. In any case, the normalization is non-essential. In applications one deals usually with so-called unimodular groups, where the left and right measures coincide [9, 13]. Obviously, for the left- or right-invariant kinetic energies the measures built of the underlying metrics are also left- or right-invariant. Therefore, they coincide with the Haar measure. This enables one to use the Haar measure from the very beginning as the integration prescription underlying the scalar product definition. This is very convenient for two reasons. First of all, for typical Lie groups appearing in physical applications the Haar measures are explicitly known. Another nice and reasonable feature of such a procedure is that once fixing the normalization we are given the standard integration procedure, whereas the use of changes the scalar product normalization for various models of (of ). This constant factor change is not very essential, but its dependence on various inertial parameters like the above , , obscures the comparison of various models.
7 Unitary transformations
It follows from the very nature of the Haar measure that on the level of wave functions the left and right regular translations are realized by unitary transformations on L. More precisely, let us define for any the operators , given by , for any . It is clear that , preserve the space L, moreover, they are unitary transformations,
The assignments are, respectively, a unitary anti-representation and representation of in L, i.e.,
To convert into representation it is sufficient to replace by . Obviously, the difference is rather cosmetical and related to the conventions concerning the definition of the superposition of mappings. Nevertheless, any neglect may lead to the accumulation of sign errors and finally to numerically wrong results.
The operators , generate the above representations, thus, we have
with all known provisos concerning domains and exponents of evidently unbounded differential operators. It is important to remember that the left-hand sides are always well-defined bounded unitary operators acting on the whole L. Unlike this, , act only on differentiable functions, they are unbounded, and the problems of domain and convergence appear on the right-hand sides of the above equations.
Unitarity of , implies that their generators , are formally anti-self-adjoint (physicists tell roughly: anti-Hermitian), i.e.,
assuming that the left- and right-hand sides are well-defined (this is the case, e.g., for differentiable compactly supported functions on ).
Now, let us introduce the following operators:
They are formally self-adjoint, i.e., ”Hermitian” in the rough language of quantum physicists:
with the same as previously provisos concerning the functions , . Obviously, denotes the (”crossed”) Planck constant.
The operators , are quantized counterparts of classical physical quantities , . They may be expressed as follows:
There is no problem of ordering of -variables and differential operators . This ordering is exactly as above, just due to the interpretation of and as infinitesimal generators of one-parameter subgroups.
8 Quantum Poisson bracket
In virtue of the above group-theoretical arguments the quantum Poisson-bracket rules are analogous to the classical ones,
Let us remind that the quantum Poisson bracket of operators is defined as
One can show (see, e.g., [5]) that the kinetic energy operators for the left- and right-invariant models are given simply by the formerly quoted formulas with the classical generators , replaced by the corresponding operators , , i.e.,
As mentioned, the literal calculation of the Laplace-Beltrami operator in terms of local coordinates is usually very complicated and the resulting formula is, as a rule, quite obscure, non-readable, and because of this practically non-useful. Unlike this, the above block expression in terms of generators is geometrically lucid and well apt for solving procedure of the Schrödinger equation. In various problems it is sufficient to operate algebraically with quantum Poisson brackets. To complete the above system of brackets let us quote expressions involving generators and position-type variables. The latter ones are operators which multiply wave functions by other functions on the configuration space, i.e., . If there is no danger of misunderstanding, we will not distinguish graphically between and . Just as on the classical level we have
Obviously, two position-type operators mutually commute.
Remark: Obviously, only for generators and position quantities the quantum and classical Poisson rules are identical. For other quantities it is no longer the case, moreover, there are problems with the very definition of quantum counterparts of other classical quantities. The very existence of the above distinguished family of physical quantities is due to the group-theoretical background of degrees of freedom.
9 Corresponding Haar measures
Let us now return to the main subject of our analysis, i.e., to the quantization of affine systems. For technical purposes we again fix some Cartesian coordinates , in , and identify analytically the configuration space with the affine group GAf. Similarly, the internal configuration space is identified with GL. The corresponding Haar measures will be denoted respectively by , , i.e., , . In terms of the binary decomposition we have the following expression:
where denotes the Haar measure on SO. Due to the compactness of SO we can, but of course need not, normalize to unity, .
The Haar measure on SL used in quantum mechanics of incompressible objects may be symbolically written with the use of Dirac distribution as follows:
10 Kinetic energy operators for affine models
Affine spin and its co-moving representation are, respectively, given by the following formally self-adjoint operators:
The usual spin and vorticity operators are respectively given by
Kinetic energy operators corresponding to the formerly described classical models of internal kinetic energies are simply obtained by replacing the classical quantities , by the above operators , without any attention to be paid to the ordering problem (just because of the group-theoretic interpretation of these quantities).
Thus, for the affine-affine model (affine both in space and in the material) we have
Similarly, for models with the mixed metrical-affine and affine-metrical invariance we have, respectively,
where , ,
Similarly, the corresponding expressions for have the following forms:
where , are linear momentum operators respectively in laboratory and co-moving representations,
Just as previously, , are contravariant reciprocals of deformation tensors: , . As mentioned, there are no affine-affine models of , and therefore, no affine-affine models of . The corresponding ”metric tensors” on GAf would have to be singular.
Another important physical quantity is the canonical momentum conjugate to the dilatational coordinate . On the quantum level it is represented by the formally self-adjoint operator
It is also convenient to use the deviatoric (shear) parts of the affine spin,
obviously, .
Due to the group-theoretical structure of the above objects as generators, the classical splitting of into incompressible (shear-rotational) and dilatational parts remains literally valid, namely, we have the following expressions:
where, obviously,
As mentioned, the SL-part of has both discrete and continuous spectrum and predicts the bounded oscillatory solutions even if no extra potential on SL is used (classically this is the geodetic model with an open subset of bounded trajectories in the complete solution). In particular, there is an open range of inertial parameters for which the spectrum is positive or at least bounded from below.
One can hope that on the basis of commutation relations for the Lie algebra SL some information concerning spectra and wave functions may be perhaps obtained without the explicit solving of differential equations.
There are GL-problems where the separation of the isochoric SL-terms is not necessary, sometimes it is even undesirable. Then it is more convenient to use the quantized version of ([20]4.45)111this kind of references means that, e.g., in Part I [20] the expression could be found in section 4 with label 45, ([20]4.46), ([20]4.47), i.e.,
where , , , and are operators of the full GL-Casimirs, i.e., we have
the above contracted products contain terms. In particular,
In particular, if the inertial constant vanishes, then the model may be interpreted in terms of one-dimensional multi-body problems in the sense of Calogero, Moser, Sutherland [15, 21], etc., quite independently of our primary motivation, i.e., -dimensional affine systems.
As mentioned, on GL, i.e., for compressible objects with dilatations, some dilatation-stabilizing potential must be introduced if the system has to possess bound states. For more general doubly isotropic potentials depending only on deformation invariants, there is no possibility of avoiding differential equations (with the help of ladder procedures). Nevertheless, the problem is then still remarkably simplified in comparison with the general case, because the quantum dynamics of deformation invariants is autonomous (in this respect the quantum problem is in a sense simpler than the classical one). The procedure is based then on the two-polar decomposition, which by the way is also very convenient on the level of purely geodetic models. In certain problems, e.g., spatially isotropic but materially anisotropic ones, the polar decomposition is also convenient.
11 Two-polar decomposition in quantum case
Let us go back to classical expressions for , , , , . On the quantum level the classical quantities , become the operators of spin and minus vorticity (4) , , i.e., Hermitian generators of the unitary groups of spatial and material rotations , , where , , acting argument-wise on wave functions. Classical quantities , were co-moving representants of tensors , , i.e., their projections onto principal axes of the Cauchy and Green deformation tensors. Their quantum counterparts, i.e., operators , are also co-moving representants of , , i.e.,
They are Hermitian generators of the argument-wise right-hand side action ([20]6.63) of SO on the wave functions. Just as in classical theory, it is convenient to introduce operators
Commutation relations for operators , , , , , are directly isomorphic with those for the generators of SO and are expressed in a straightforward way in terms of SO-structure constants.
Now we are ready to write down explicitly our kinetic energy and Hamiltonian operators in terms of the two-polar splitting. We begin with the traditional integer spin models, and later on we show how half-integer angular momentum of extended bodies may appear in a natural way.
Quantum operators , have the following form:
where, according to the formulas (1), (2), (3), and are real first-order differential operators generating left regular translations on SO, or, more precisely, on the isometric factors , of the two-polar splitting, i.e.,
In the formulas above, are functions on the manifolds of isometries from to and from to . Analytically, in Cartesian coordinates they are simply functions on SO. Matrices , are respectively - and -antisymmetric: , . Their independent components are canonical coordinates of the first kind on SO, SO (roughly, on SO),
where , are basic elements corresponding to some (arbitrary) choice of bases in , , i.e., |
21a6004caeec4648 | Symmetry, Integrability and Geometry: Methods and Applications (SIGMA)
SIGMA 13 (2017), 053, 14 pages arXiv:1704.00043
Contribution to the Special Issue on Symmetries and Integrability of Difference Equations
Symmetries of the Hirota Difference Equation
Andrei K. Pogrebkov ab
a) Steklov Mathematical Institute of Russian Academy of Science, Moscow, Russia
b) National Research University Higher School of Economics, Moscow, Russia
Received March 31, 2017, in final form July 02, 2017; Published online July 07, 2017
Continuous symmetries of the Hirota difference equation, commuting with shifts of independent variables, are derived by means of the dressing procedure. Action of these symmetries on the dependent variables of the equation is presented. Commutativity of these symmetries enables interpretation of their parameters as ''times'' of the nonlinear integrable partial differential-difference and differential equations. Examples of equations resulting in such procedure and their Lax pairs are given. Besides these, ordinary, symmetries the additional ones are introduced and their action on the Scattering data is presented.
Key words: Hirota difference equation; symmetries; integrable differential-difference and differential equations; additional symmetries.
pdf (378 kb) tex (20 kb)
1. Bogdanov L.V., Konopelchenko B.G., Generalized KP hierarchy: Möbius symmetry, symmetry constraints and Calogero-Moser system, Phys. D 152/153 (2001), 85-96, solv-int/9912005.
2. Boiti M., Pempinelli F., Pogrebkov A.K., Cauchy-Jost function and hierarchy of integrable equations, Theoret. and Math. Phys. 185 (2015), 1599-1613, arXiv:1508.02229.
3. Dryuma V.S., Analytic solution of the two-dimensional Korteweg-de Vries (KdV) equation, JETP Lett. 19 (1974), 387-388.
4. Fioravanti D., Nepomechie R.I., An inhomogeneous Lax representation for the Hirota equation, J. Phys. A: Math. Theor. 50 (2017), 054001, 14 pages, arXiv:1609.06761.
5. Grinevich P.G., Orlov A.Yu., Virasoro action on Riemann surfaces, Grassmannians, $\det \overline\partial_J$ and Segal-Wilson $\tau$-function, in Problems of Modern Quantum Field Theory (Alushta, 1989), Res. Rep. Phys., Springer, Berlin, 1989, 86-106.
6. Hirota R., Nonlinear partial difference equations. II. Discrete-time Toda equation, J. Phys. Soc. Japan 43 (1977), 2074-2078.
7. Hirota R., Discrete analogue of a generalized Toda equation, J. Phys. Soc. Japan 50 (1981), 3785-3791.
8. Kadomtsev B.B., Petviashvili V.I., On the stability of solitary waves in weakly dispersive media, Sov. Phys. Dokl. 192 (1970), 539-541.
9. Krichever I., Wiegmann P., Zabrodin A., Elliptic solutions to difference non-linear equations and related many-body problems, Comm. Math. Phys. 193 (1998), 373-396, hep-th/9704090.
10. Orlov A.Yu., Shul'man E.I., Additional symmetries of the nonlinear Schrödinger equation, Theoret. and Math. Phys. 64 (1985), 862-866.
11. Pogrebkov A.K., On time evolutions associated with the nonstationary Schrödinger equation, in L.D. Faddeev's Seminar on Mathematical Physics, Amer. Math. Soc. Transl. Ser. 2, Vol. 201, Amer. Math. Soc., Providence, RI, 2000, 239-255, math-ph/9902014.
12. Pogrebkov A.K., Hirota difference equation and a commutator identity on an associative algebra, St. Petersburg Math. J. 22 (2011), 473-483.
13. Pogrebkov A.K., Hirota difference equation: inverse scattering transform, Darboux transformation, and solitons, Theoret. and Math. Phys. 181 (2014), 1585-1598, arXiv:1407.0677.
14. Pogrebkov A.K., Commutator identities on associative algebras, the non-Abelian Hirota difference equation and its reductions, Theoret. and Math. Phys. 187 (2016), 823-834.
15. Saito S., Octahedral structure of the Hirota-Miwa equation, J. Nonlinear Math. Phys. 19 (2012), 539-550.
16. Zabrodin A.V., Hirota difference equations, Theoret. and Math. Phys. 113 (1997), 1347-1392, solv-int/9704001.
17. Zabrodin A.V., Bäcklund transformation for the Hirota difference equation, and the supersymmetric Bethe ansatz, Theoret. and Math. Phys. 155 (2008), 567-584.
18. Zakharov V.E., Manakov S.V., Construction of multidimensional nonlinear integrable systems and their solutions, Funct. Anal. Appl. 19 (1985), 89-101.
19. Zakharov V.E., Shabat A.B., A scheme for integrating the nonlinear equations of mathematical physics by the method of the inverse scattering problem. I, Funct. Anal. Appl. 8 (1977), 226-235.
Previous article Next article Contents of Volume 13 (2017) |
782684611f1f97b9 | Citation for this page in APA citation style. Close
Mortimer Adler
Rogers Albritton
Alexander of Aphrodisias
Samuel Alexander
William Alston
Louise Antony
Thomas Aquinas
David Armstrong
Harald Atmanspacher
Robert Audi
Alexander Bain
Mark Balaguer
Jeffrey Barrett
William Belsham
Henri Bergson
George Berkeley
Isaiah Berlin
Richard J. Bernstein
Bernard Berofsky
Robert Bishop
Max Black
Susanne Bobzien
Emil du Bois-Reymond
Hilary Bok
Laurence BonJour
George Boole
Émile Boutroux
Michael Burke
Joseph Keim Campbell
Rudolf Carnap
Ernst Cassirer
David Chalmers
Roderick Chisholm
Randolph Clarke
Samuel Clarke
Anthony Collins
Antonella Corradini
Diodorus Cronus
Jonathan Dancy
Donald Davidson
Mario De Caro
Daniel Dennett
Jacques Derrida
René Descartes
Richard Double
Fred Dretske
John Dupré
John Earman
Laura Waddell Ekstrom
Herbert Feigl
John Martin Fischer
Owen Flanagan
Luciano Floridi
Philippa Foot
Alfred Fouilleé
Harry Frankfurt
Richard L. Franklin
Michael Frede
Gottlob Frege
Peter Geach
Edmund Gettier
Carl Ginet
Alvin Goldman
Nicholas St. John Green
H.Paul Grice
Ian Hacking
Ishtiyaque Haji
Stuart Hampshire
Sam Harris
William Hasker
Georg W.F. Hegel
Martin Heidegger
Thomas Hobbes
David Hodgson
Shadsworth Hodgson
Baron d'Holbach
Ted Honderich
Pamela Huby
David Hume
Ferenc Huoranszki
William James
Lord Kames
Robert Kane
Immanuel Kant
Tomis Kapitan
Jaegwon Kim
William King
Hilary Kornblith
Christine Korsgaard
Saul Kripke
Andrea Lavazza
Keith Lehrer
Gottfried Leibniz
Michael Levin
George Henry Lewes
David Lewis
Peter Lipton
C. Lloyd Morgan
John Locke
Michael Lockwood
E. Jonathan Lowe
John R. Lucas
Alasdair MacIntyre
Ruth Barcan Marcus
James Martineau
Storrs McCall
Hugh McCann
Colin McGinn
Michael McKenna
Brian McLaughlin
John McTaggart
Paul E. Meehl
Uwe Meixner
Alfred Mele
Trenton Merricks
John Stuart Mill
Dickinson Miller
Thomas Nagel
Friedrich Nietzsche
John Norton
Robert Nozick
William of Ockham
Timothy O'Connor
David F. Pears
Charles Sanders Peirce
Derk Pereboom
Steven Pinker
Karl Popper
Huw Price
Hilary Putnam
Willard van Orman Quine
Frank Ramsey
Ayn Rand
Michael Rea
Thomas Reid
Charles Renouvier
Nicholas Rescher
Richard Rorty
Josiah Royce
Bertrand Russell
Paul Russell
Gilbert Ryle
Jean-Paul Sartre
Kenneth Sayre
Moritz Schlick
Arthur Schopenhauer
John Searle
Wilfrid Sellars
Alan Sidelle
Ted Sider
Henry Sidgwick
Walter Sinnott-Armstrong
Saul Smilansky
Michael Smith
Baruch Spinoza
L. Susan Stebbing
Isabelle Stengers
George F. Stout
Galen Strawson
Peter Strawson
Eleonore Stump
Francisco Suárez
Richard Taylor
Kevin Timpe
Mark Twain
Peter Unger
Peter van Inwagen
Manuel Vargas
John Venn
Kadri Vihvelin
G.H. von Wright
David Foster Wallace
R. Jay Wallace
Ted Warfield
Roy Weatherford
William Whewell
Alfred North Whitehead
David Widerker
David Wiggins
Bernard Williams
Timothy Williamson
Ludwig Wittgenstein
Susan Wolf
Michael Arbib
Bernard Baars
Gregory Bateson
John S. Bell
Charles Bennett
Ludwig von Bertalanffy
Susan Blackmore
Margaret Boden
David Bohm
Niels Bohr
Ludwig Boltzmann
Emile Borel
Max Born
Satyendra Nath Bose
Walther Bothe
Hans Briegel
Leon Brillouin
Stephen Brush
Henry Thomas Buckle
S. H. Burbury
Donald Campbell
Anthony Cashmore
Eric Chaisson
Jean-Pierre Changeux
Arthur Holly Compton
John Conway
John Cramer
E. P. Culverwell
Charles Darwin
Terrence Deacon
Louis de Broglie
Max Delbrück
Abraham de Moivre
Paul Dirac
Hans Driesch
John Eccles
Arthur Stanley Eddington
Paul Ehrenfest
Albert Einstein
Hugh Everett, III
Franz Exner
Richard Feynman
R. A. Fisher
Joseph Fourier
Lila Gatlin
Michael Gazzaniga
GianCarlo Ghirardi
J. Willard Gibbs
Nicolas Gisin
Paul Glimcher
Thomas Gold
Brian Goodwin
Joshua Greene
Jacques Hadamard
Patrick Haggard
Stuart Hameroff
Augustin Hamon
Sam Harris
Hyman Hartman
John-Dylan Haynes
Martin Heisenberg
Werner Heisenberg
John Herschel
Jesper Hoffmeyer
E. T. Jaynes
William Stanley Jevons
Roman Jakobson
Pascual Jordan
Ruth E. Kastner
Stuart Kauffman
Martin J. Klein
Simon Kochen
Stephen Kosslyn
Ladislav Kovàč
Rolf Landauer
Alfred Landé
Pierre-Simon Laplace
David Layzer
Benjamin Libet
Seth Lloyd
Hendrik Lorentz
Josef Loschmidt
Ernst Mach
Donald MacKay
Henry Margenau
James Clerk Maxwell
Ernst Mayr
Ulrich Mohrhoff
Jacques Monod
Emmy Noether
Abraham Pais
Howard Pattee
Wolfgang Pauli
Massimo Pauri
Roger Penrose
Steven Pinker
Colin Pittendrigh
Max Planck
Susan Pockett
Henri Poincaré
Daniel Pollen
Ilya Prigogine
Hans Primas
Adolphe Quételet
Juan Roederer
Jerome Rothstein
David Ruelle
Erwin Schrödinger
Aaron Schurger
Claude Shannon
David Shiang
Herbert Simon
Dean Keith Simonton
B. F. Skinner
Roger Sperry
John Stachel
Henry Stapp
Tom Stonier
Antoine Suarez
Leo Szilard
William Thomson (Kelvin)
Peter Tse
Heinz von Foerster
John von Neumann
John B. Watson
Daniel Wegner
Steven Weinberg
Paul A. Weiss
John Wheeler
Wilhelm Wien
Norbert Wiener
Eugene Wigner
E. O. Wilson
H. Dieter Zeh
Ernst Zermelo
Wojciech Zurek
Free Will
Mental Causation
James Symposium
John Stewart Bell
In 1964 John Bell showed how the 1935 "thought experiments" of Einstein, Podolsky, and Rosen (EPR) could be made into real experiments. He put limits on local "hidden variables" that might restore a deterministic physics in the form of what he called an "inequality," the violation of which would confirm standard quantum mechanics.
Some thinkers, mostly philosophers of science rather than working quantum physicists, think that Bell's work has restored the determinism in physics that Einstein had wanted and that Bell recovered the "local elements of reality" that Einstein hoped for.
But Bell himself came to the conclusion that local "hidden variables" will never be found that give the same results as quantum mechanics. This has come to be known as Bell's Theorem.
All theories that reproduce the predictions of quantum mechanics will be "nonlocal," Bell concluded. Nonlocality is an element of physical reality and it has produced some remarkable new applications of quantum physics, including quantum cryptography and quantum computing.
Bell based his idea of real experiments on the 1952 work of David Bohm. Bohm proposed an improvement on the original EPR experiment (which measured position and momentum). Bohm's reformulation of quantum mechanics postulates (undetectable) deterministic positions and trajectories for atomic particles, where the instantaneous collapse happens in a new "quantum potential" field that can move faster than light speed. But it is still a "nonlocal" theory.
So Bohm (and Bell) believed that nonlocal "hidden variables" might exist, and that some form of information could come into existence at remote "space-like separations" at speeds faster then light, if not instantaneously.
The original EPR paper was based on a question of Einstein's about two electrons fired in opposite directions from a central source with equal velocities. Einstein imagined them starting from a distance at t0 and approaching one another with high velocities, then for a short time interval from t1 to t1 + Δt in contact with one another, where experimental measurements could be made on the momenta, after which they separate. Now at a later time t2 it would be possible to make a measurement of electron 1's position and would therefore know the position of electron 2 without measuring it explicitly.
Einstein used the conservation of linear momentum to "know" the symmetric position of the other electron. This knowledge implies information about the remote electron that is available instantly. Einstein called this "spooky action-at-a-distance."
Bohm's 1952 thought experiment used two electrons that are prepared in an initial state of known total spin. If one electron spin is 1/2 in the up direction and the other is spin down or -1/2, the total spin is zero. The underlying physical law of importance is still a conservation law, in this case the conservation of angular momentum.
Since Bell's original work, many other physicists have defined other "Bell inequalities" and developed increasingly sophisticated experiments to test them. Most recent tests have used oppositely polarized photons coming from a central source. It is the total photon spin of zero that is conserved.
In his 1964 paper "On the Einstein-Podolsky-Rosen Paradox," Bell made the case for nonlocality.
The paradox of Einstein, Podolsky and Rosen was advanced as a argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty. There have been attempts to show that even without such a separability or locality requirement no 'hidden variable' interpretation of quantum mechanics is possible. These attempts have been examined [by Bell] elsewhere and found wanting. Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed [by Bohm]. That particular interpretation has indeed a gross non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions.
With the example advocated by Bohm and Aharonov, the EPR argument is the following. Consider a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions. Measurements can be made, say by Stern-Gerlach magnets, on selected components of the spins σ1 and σ2. If measurement of the component σ1a, where a is some unit vector, yields the value + 1 then, according to quantum mechanics, measurement of σ2a must yield the value — 1 and vice versa. Now we make the hypothesis, and it seems one at least worth considering, that if the two measurements are made at places remote from one another the orientation of one magnet does not influence the result obtained with the other.
"pre-determination" is too strong a term. The previous measurement just "determines" the later measurement.
Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined. Since the initial quantum mechanical wave function does not determine the result of an individual measurement, this predetermination implies the possibility of a more complete specification of the state.
During a mid-1980's interview by BBC Radio 3 organized by P. C. W. Davies and J. R. Brown, Bell proposed the idea of a "superdeterminism" that could explain the correlation of results in two-particle experiments without the need for faster-than-light signaling. The two experiments need only have been pre-determined by causes reaching both experiments from an earlier time.
I was going to ask whether it is still possible to maintain, in the light of experimental experience, the idea of a deterministic universe?
You know, one of the ways of understanding this business is to say that the world is super-deterministic. That not only is inanimate nature deterministic, but we, the experimenters who imagine we can choose to do one experiment rather than another, are also determined. If so, the difficulty which this experimental result creates disappears.
Free will is an illusion - that gets us out of the crisis, does it?
That's correct. In the analysis it is assumed that free will is genuine, and as a result of that one finds that the intervention of the experimenter at one point has to have consequences at a remote point, in a way that influences restricted by the finite velocity of light would not permit. If the experimenter is not free to make this intervention, if that also is determined in advance, the difficulty disappears.
Bell's superdeterminism would deny the important "free choice" of the experimenter (originally suggested by Niels Bohr and Werner Heisenberg) and later explored by John Conway and Simon Kochen. Conway and Kochen claim that the experimenters' free choice requires that atoms must have free will, something they call their Free Will Theorem.
Following John Bell's idea, Nicholas Gisin and Antoine Suarez argue that something might be coming from "outside space and time" to correlate results in their own experimental tests of Bell's Theorem. Roger Penrose and Stuart Hameroff have proposed causes coming "backward in time" to achieve the perfect EPR correlations, as has philosopher Huw Price.
A Preferred Frame?
A little later in the same BBC interview, Bell suggested that a preferred frame of reference might help to explain nonlocality and entanglement.
[Davies] Bell's inequality is, as I understand it, rooted in two assumptions: the first is what we might call objective reality - the reality of the external world, independent of our observations; the second is locality, or non-separability, or no faster-than-light signalling. Now, Aspect's experiment appears to indicate that one of these two has to go. Which of the two would you like to hang on to?
[Bell] Well, you see, I don't really know. For me it's not something where I have a solution to sell! For me it's a dilemma. I think it's a deep dilemma, and the resolution of it will not be trivial; it will require a substantial change in the way we look at things. But I would say that the cheapest resolution is something like going back to relativity as it was before Einstein, when people like Lorentz and Poincare thought that there was an aether - a preferred frame of reference - but that our measuring instruments were distorted by motion in such a way that we could not detect motion through the aether. Now, in that way you can imagine that there is a preferred frame of reference, and in this preferred frame of reference things do go faster than light. But then in other frames of reference when they seem to go not only faster than light but backwards in time, that is an optical illusion.
The standard explanation of entangled particles usually begins with an observer A, often called Alice, and a distant observer B, known as Bob. Between them is a source of two entangled particles. The two-particle wave function describing the indistinguishable particles cannot be separated into a product of two single-particle wave functions.
The problem of faster-than-light signaling arises when Alice is said to measure particle A and then puzzle over how Bob's (later) measurements of particle B can be perfectly correlated, when there is not enough time for any "influence" to travel from A to B.
Back in the 1960's, C. W. Rietdijk and Hilary Putnam argued that physical determinism could be proved to be true by considering the experiments and observers A and B in a "spacelike" separation and moving at high speed with respect to one another. Roger Penrose developed a similar argument in his book The Emperor's New Mind. It is called the Andromeda Paradox.
The EPR "paradox" is the result of a naive non-relativistic description of events. Although the two events (measurements of particles A and B) are simultaneous in our preferred frame, the space-like separation of the events means that from Alice's point of view, any knowledge of event B is out in her future. Bob likewise sees Alice's event A out in his future. These both cannot be true. Yet they are both true (and in some sense neither is true). Thus the paradox.
Instead of just one particle making an appearance in the collapse of a single-particle wave function, in the two-particle case, when either particle is measured, we know instantly those properties of the other particle that satisfy the conservation laws, including its location equidistant from, but on the opposite side of, the source, and its other properties such as spin.
You can compare the collapse of the two-particle probability amplitude above to the single-particle collapse here.
Here is an animation that illustrates the unprovable assumption that the two electrons are randomly produced in a spin-up and a spin-down state, and that they remain in those states no matter how far they separate, provided neither interacts until the measurement. An interaction does what is described as decohering the two states.
How Mysterious Is Entanglement?
We can also ask what happens if Bob is not at the same distance from the origin as Alice. When Alice detects the particle (with say spin up), at that instant the other particle also becomes determinate (with spin down) at the same distance on the other side of the origin. It now continues, in that determinate state, to Bob's measuring apparatus.
Recall Bell's description of the process (quoted above), with its mistaken bias toward assuming first one measurement is made, and the other measurement is made later.
If measurement of the component σ1 • a, where a is some unit vector, yields the value + 1 then, according to quantum mechanics, measurement of σ2 • a must yield the value — 1 and vice versa... Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined.
Since the collapse of the two-particle wave function is indeterminate, nothing is pre-determined, although σ2 is indeed determined once σ1 is measured.
In 1987, Bell contributed an article to a centenary volume for Erwin Schrödinger entitled
Are There Quantum Jumps? Schrödinger denied such jumps or any collapses of the wave function. Bell's title was inspired by two articles with the same title by Schrödinger in 1952 (Part I, Part II).
Just a year before Bell's death in 1990, physicists assembled for a conference on 62 Years of Uncertainty (referring to Werner Heisenberg's 1927 principle of indeterminacy).
John Bell's contribution to the conference was an article called "Against Measurement." In it he attacked Max Born's statistical interpretation of quantum mechanics. And he praised the new ideas of GianCarlo Ghirardi and his colleagues, Alberto Rimini and Tomaso Weber:
In the beginning, Schrödinger tried to interpret his wavefunction as giving somehow the density of the stuff of which the world is made. He tried to think of an electron as represented by a wavepacket — a wave-function appreciably different from zero only over a small region in space. The extension of that region he thought of as the actual size of the electron — his electron was a bit fuzzy. At first he thought that small wavepackets, evolving according to the Schrödinger equation, would remain small. But that was wrong. Wavepackets diffuse, and with the passage of time become indefinitely extended, according to the Schrödinger equation. But however far the wavefunction has extended, the reaction of a detector to an electron remains spotty. So Schrödinger's 'realistic' interpretation of his wavefunction did not survive.
Then came the Born interpretation. The wavefunction gives not the density of stuff, but gives rather (on squaring its modulus) the density of probability. Probability of what exactly? Not of the electron being there, but of the electron being found there, if its position is 'measured.'
Why this aversion to 'being' and insistence on 'finding'? The founding fathers were unable to form a clear picture of things on the remote atomic scale. They became very aware of the intervening apparatus, and of the need for a 'classical' base from which to intervene on the quantum system. And so the shifty split.
The kinematics of the world, in this orthodox picture, is given a wavefunction (maybe more than one?) for the quantum part, and classical variables — variables which have values — for the classical part: (Ψ(t, q, ...), X(t),...). The Xs are somehow macroscopic. This is not spelled out very explicitly. The dynamics is not very precisely formulated either. It includes a Schrödinger equation for the quantum part, and some sort of classical mechanics for the classical part, and 'collapse' recipes for their interaction.
It seems to me that the only hope of precision with the dual (Ψ, x) kinematics is to omit completely the shifty split, and let both Ψ and x refer to the world as a whole. Then the xs must not be confined to some vague macroscopic scale, but must extend to all scales. In the picture of de Broglie and Bohm, every particle is attributed a position x(t). Then instrument pointers — assemblies of particles have positions, and experiments have results. The dynamics is given by the world Schrödinger equation plus precise 'guiding' equations prescribing how the x(t)s move under the influence of Ψ. Particles are not attributed angular momenta, energies, etc., but only positions as functions of time. Peculiar 'measurement' results for angular momenta, energies, and so on, emerge as pointer positions in appropriate experimental setups. Considerations of KG [Kurt Gottfried] and vK [N. G. van Kampen] type, on the absence (FAPP) [For All Practical Purposes] of macroscopic interference, take their place here, and an important one, is showing how usually we do not have (FAPP) to pay attention to the whole world, but only to some subsystem and can simplify the wave-function... FAPP.
The Born-type kinematics (Ψ, X) has a duality that the original 'density of stuff' picture of Schrödinger did not. The position of the particle there was just a feature of the wavepacket, not something in addition. The Landau—Lifshitz approach can be seen as maintaining this simple non-dual kinematics, but with the wavefunction compact on a macroscopic rather than microscopic scale. We know, they seem to say, that macroscopic pointers have definite positions. And we think there is nothing but the wavefunction. So the wavefunction must be narrow as regards macroscopic variables. The Schrödinger equation does not preserve such narrowness (as Schrödinger himself dramatised with his cat). So there must be some kind of 'collapse' going on in addition, to enforce macroscopic narrowness. In the same way, if we had modified Schrödinger's evolution somehow we might have prevented the spreading of his wavepacket electrons. But actually the idea that an electron in a ground-state hydrogen atom is as big as the atom (which is then perfectly spherical) is perfectly tolerable — and maybe even attractive. The idea that a macroscopic pointer can point simultaneously in different directions, or that a cat can have several of its nine lives at the same time, is harder to swallow. And if we have no extra variables X to express macroscopic definiteness, the wavefunction itself must be narrow in macroscopic directions in the configuration space. This the Landau—Lifshitz collapse brings about. It does so in a rather vague way, at rather vaguely specified times.
In the Ghirardi—Rimini—Weber scheme (see the contributions of Ghirardi, Rimini, Weber, Pearle, Gisin and Diosi presented at 62 Years of Uncertainty, Erice, Italy, 5-14 August 1989) this vagueness is replaced by mathematical precision. The Schrödinger wavefunction even for a single particle, is supposed to be unstable, with a prescribed mean life per particle, against spontaneous collapse of a prescribed form. The lifetime and collapsed extension are such that departures of the Schrödinger equation show up very rarely and very weakly in few-particle systems. But in macroscopic systems, as a consequence of the prescribed equations, pointers very rapidly point, and cats are very quickly killed or spared.
The orthodox approaches, whether the authors think they have made derivations or assumptions, are just fine FAPP — when used with the good taste and discretion picked up from exposure to good examples. At least two roads are open from there towards a precise theory, it seems to me. Both eliminate the shifty split. The de Broglie—Bohm-type theories retain, exactly, the linear wave equation, and so necessarily add complementary variables to express the non-waviness of the world on the macroscopic scale. The GRW-type theories have nothing in the kinematics but the wavefunction. It gives the density (in a multidimensional configuration space!) of stuff. To account for the narrowness of that stuff in macroscopic dimensions, the linear Schrödinger equation has to be modified, in this GRW picture by a mathematically prescribed spontaneous collapse mechanism.
The big question, in my opinion, is which, if either, of these two precise pictures can be redeveloped in a Lorentz invariant way.
...All historical experience confirms that men might not achieve the possible if they had not, time and time again, reached out for the impossible. (Max Weber)
...we do not know where we are stupid until we stick our necks out. (R. P. Feynman)
On the 22nd of January 1990, Bell gave a talk explaining his theorem at CERN in Geneva
organized by Antoine Suarez, director of the
Center for Quantum Philosophy.
There are links on the CERN website to the
video of this talk, and to a transcription.
In this talk, Bell summarizes the situation as follows:
It just is a fact that quantum mechanical predictions and experiments, in so far as they have been done, do not agree with [my] inequality. And that's just a brutal fact of nature...that's just the fact of the situation; the Einstein program fails, that's too bad for Einstein, but should we worry about that?
I cannot say that action at a distance is required in physics. But I can say that you cannot get away with no action at a distance. You cannot separate off what happens in one place and what happens in another. Somehow they have to be described and explained jointly.
Bell gives three reasons for not worrying.
1. Nonlocality is unavoidable, even if it looks like "action at a distance."
[It does not, with a proper understanding of quantum physics. See our EPR page.]
2. Because the events are in a spacelike separation, either one can occur before the other in some relativistic frame, so no "causal" connection can exist between them.
3. No faster-than-light signals can be sent using entanglement and nonlocality.
He concludes:
So as a solution of this situation, I think we cannot just say 'Oh oh, nature is not like that.' I think you must find a picture in which perfect correlations are natural, without implying determinism, because that leads you back to nonlocality. And also in this independence as far as our individual experiences goes, our independence of the rest of the world is also natural. So the connections have to be very subtle, and I have told you all that I know about them. Thank you.
The work of GianCarlo Ghirardi that Bell endorsed is a scheme that makes the wave function collapse by adding small (order of 10-24) nonlinear and stochastic terms to the linear Schrödinger equation. GRW can not predict when and where their collapse occurs (it is simply random), but the contact with macroscopic objects such as a measuring apparatus (with the order of 1024 atoms) makes the probability of collapse of order unity.
Information physics removes Bell's "shifty split" without "hidden variables" or making ad hoc non-linear additions like those of Ghirardi-Rimini-Weber to the linear Schrödinger equation. The "moment" at which the boundary between quantum and classical worlds occurs is the moment that irreversible observable information enters the universe.
So we can now look at John Bell's diagram of possible locations for his "shifty split" and identify the correct moment - when irreversible information enters the universe.
In the information physics solution to the problem of measurement, the timing and location of Bell's "shifty split" (the "cut" or "Schnitt" of Heisenberg and von Neumann) are identified with the interaction between quantum system and classical apparatus that leaves the apparatus in an irreversible stable state providing information to the observer.
As Bell may have seen, it is therefore not a "measurement" by a conscious observer that is needed to "collapse" wave functions. It is the irreversible interaction of the quantum system with another system, whether quantum or approximately classical. The interaction must be one that changes the information about the system. And that means a local entropy decrease and overall entropy increase to make the information stable enough to be observed by an experimenter and therefore be a measurement.
Against Measurement (PDF)
On the Einstein-Podolsky-Rosen Paradox (PDF)
Are There Quantum Jumps? (PDF, Excerpt)
BBC Interview (PDF, Excerpt)
For Teachers
For Scholars
Chapter 1.5 - The Philosophers Chapter 2.1 - The Problem of Knowledge
Home Part Two - Knowledge
Normal | Teacher | Scholar |
7d38cd89e52c36e5 | Sample records for archaean cellular life
1. Contributions to late Archaean sulphur cycling by life on land (United States)
Stüeken, Eva E.; Catling, David C.; Buick, Roger
Evidence in palaeosols suggests that life on land dates back to at least 2.76Gyr ago. However, the biogeochemical effects of Archaean terrestrial life are thought to have been limited, owing to the lack of a protective ozone shield from ultraviolet radiation for terrestrial organisms before the rise of atmospheric oxygen levels several hundred million years later. Records of chromium delivery from the continents suggest that microbial mineral oxidation began at least 2.48Gyr ago but do not indicate when the terrestrial biosphere began to dominate important biogeochemical cycles. Here we combine marine sulphur abundance data with a mass balance model of the sulphur cycle to estimate the effects of the Archaean and early Proterozoic terrestrial biosphere on sulphur cycling. We find that terrestrial oxidation of pyrite by microbes using oxygen has contributed a substantial fraction of the total sulphur weathering flux since at least 2.5Gyr ago, with probable evidence of such activity 2.7-2.8Gyr ago. The late Archaean onset of terrestrial sulphur cycling is supported by marine molybdenum abundance data and coincides with a shift to more sulphidic ocean conditions. We infer that significant microbial land colonization began by 2.7-2.8Gyr ago. Our identification of pyrite oxidation at this time provides further support for the appearance of molecular oxygen several hundred million years before the Great Oxidation Event.
2. The photochemical origins of life and photoreaction of ferrous ion in the archaean oceans (United States)
Mauzerall, David C.
A general argument is made for the photochemical origins of life. A constant flux of free energy is required to maintain the organized state of matter called life. Solar photons are the unique source of the large amounts of energy probably required to initiate this organization and certainly required for the evolution of life to occur. The completion of this argument will require the experimental determination of suitable photochemical reactions. It is shown that biogenetic porphyrins readily photooxidize substrates and emit hydrogen in the presence of a catalyst. These results are consistent with the Granick hypothesis, which relates a biosynthetic pathway to its evolutionary origin. It has been shown that photoexcitation of ferrous ion at neutral pH with near ultraviolet light produces hydrogen with high quantum yield. This same simple system may reduce carbon dioxide to formaldehyde and further products. These reactions offer a solution to the dilemma confronting the Oparin-Urey-Miller model of the chemical origin of life. If carbon dioxide is the main form of carbon on the primitive earth, the ferrous photoreaction may provide the reduced carbon necessary for the formation of amino acids and other biogenic molecules. These results suggest that this progenitor of modern photosynthesis may have contributed to the chemical origins of life.
3. Game of Life Cellular Automata
CERN Document Server
Adamatzky, Andrew
In the late 1960s, British mathematician John Conway invented a virtual mathematical machine that operates on a two-dimensional array of square cell. Each cell takes two states, live and dead. The cells' states are updated simultaneously and in discrete time. A dead cell comes to life if it has exactly three live neighbours. A live cell remains alive if two or three of its neighbours are alive, otherwise the cell dies. Conway's Game of Life became the most programmed solitary game and the most known cellular automaton. The book brings together results of forty years of study into computational
4. The origins of cellular life. (United States)
Schrum, Jason P; Zhu, Ting F; Szostak, Jack W
Understanding the origin of cellular life on Earth requires the discovery of plausible pathways for the transition from complex prebiotic chemistry to simple biology, defined as the emergence of chemical assemblies capable of Darwinian evolution. We have proposed that a simple primitive cell, or protocell, would consist of two key components: a protocell membrane that defines a spatially localized compartment, and an informational polymer that allows for the replication and inheritance of functional information. Recent studies of vesicles composed of fatty-acid membranes have shed considerable light on pathways for protocell growth and division, as well as means by which protocells could take up nutrients from their environment. Additional work with genetic polymers has provided insight into the potential for chemical genome replication and compatibility with membrane encapsulation. The integration of a dynamic fatty-acid compartment with robust, generalized genetic polymer replication would yield a laboratory model of a protocell with the potential for classical Darwinian biological evolution, and may help to evaluate potential pathways for the emergence of life on the early Earth. Here we discuss efforts to devise such an integrated protocell model.
5. Selection for Protein Kinetic Stability Connects Denaturation Temperatures to Organismal Temperatures and Provides Clues to Archaean Life (United States)
Romero-Romero, M. Luisa; Risso, Valeria A.; Martinez-Rodriguez, Sergio; Gaucher, Eric A.; Ibarra-Molero, Beatriz; Sanchez-Ruiz, Jose M.
The relationship between the denaturation temperatures of proteins (Tm values) and the living temperatures of their host organisms (environmental temperatures: TENV values) is poorly understood. Since different proteins in the same organism may show widely different Tm’s, no simple universal relationship between Tm and TENV should hold, other than Tm≥TENV. Yet, when analyzing a set of homologous proteins from different hosts, Tm’s are oftentimes found to correlate with TENV’s but this correlation is shifted upward on the Tm axis. Supporting this trend, we recently reported Tm’s for resurrected Precambrian thioredoxins that mirror a proposed environmental cooling over long geological time, while remaining a shocking ~50°C above the proposed ancestral ocean temperatures. Here, we show that natural selection for protein kinetic stability (denaturation rate) can produce a Tm↔TENV correlation with a large upward shift in Tm. A model for protein stability evolution suggests a link between the Tm shift and the in vivo lifetime of a protein and, more specifically, allows us to estimate ancestral environmental temperatures from experimental denaturation rates for resurrected Precambrian thioredoxins. The TENV values thus obtained match the proposed ancestral ocean cooling, support comparatively high Archaean temperatures, and are consistent with a recent proposal for the environmental temperature (above 75°C) that hosted the last universal common ancestor. More generally, this work provides a framework for understanding how features of protein stability reflect the environmental temperatures of the host organisms. PMID:27253436
6. The origin of cellular life (United States)
Ingber, D. E.
This essay presents a scenario of the origin of life that is based on analysis of biological architecture and mechanical design at the microstructural level. My thesis is that the same architectural and energetic constraints that shape cells today also guided the evolution of the first cells and that the molecular scaffolds that support solid-phase biochemistry in modern cells represent living microfossils of past life forms. This concept emerged from the discovery that cells mechanically stabilize themselves using tensegrity architecture and that these same building rules guide hierarchical self-assembly at all size scales (Sci. Amer 278:48-57;1998). When combined with other fundamental design principles (e.g., energy minimization, topological constraints, structural hierarchies, autocatalytic sets, solid-state biochemistry), tensegrity provides a physical basis to explain how atomic and molecular elements progressively self-assembled to create hierarchical structures with increasingly complex functions, including living cells that can self-reproduce.
7. Geological constraints on detecting the earliest life on Earth: a perspective from the Early Archaean (older than 3.7 Gyr) of southwest Greenland
Fedo, Christopher M; Whitehouse, Martin J.; Kamber, Balz S.
At greater than 3.7 Gyr, Earth's oldest known supracrustal rocks, comprised dominantly of mafic igneous with less common sedimentary units including banded iron formation (BIF), are exposed in southwest Greenland. Regionally, they were intruded by younger tonalites, and then both were intensely dynamothermally metamorphosed to granulite facies (the highest pressures and temperatures generally encountered in the Earth's crust during metamorphism) in the Archaean and subsequently at lower grade...
8. Aerobic respiration in the Archaean? (United States)
Towe, K M
The Earth's atmosphere during the Archaean era (3,800-2,500 Myr ago) is generally thought to have been anoxic, with the partial pressure of atmospheric oxygen about 10(-12) times the present value. In the absence of aerobic consumption of oxygen produced by photosynthesis in the ocean, the major sink for this oxygen would have been oxidation of dissolved Fe(II). Atmospheric oxygen would also be removed by the oxidation of biogenic methane. But even very low estimates of global primary productivity, obtained from the amounts of organic carbon preserved in Archaean rocks, seem to require the sedimentation of an unrealistically large amount of iron and the oxidation of too much methane if global anoxia was to be maintained. I therefore suggest that aerobic respiration must have developed early in the Archaean to prevent a build-up of atmospheric oxygen before the Proterozoic. An atmosphere that contained a low (0.2-0.4%) but stable proportion of oxygen is required.
9. Rapid evolutionary innovation during an Archaean genetic expansion. (United States)
David, Lawrence A; Alm, Eric J
The natural history of Precambrian life is still unknown because of the rarity of microbial fossils and biomarkers. However, the composition of modern-day genomes may bear imprints of ancient biogeochemical events. Here we use an explicit model of macroevolution including gene birth, transfer, duplication and loss events to map the evolutionary history of 3,983 gene families across the three domains of life onto a geological timeline. Surprisingly, we find that a brief period of genetic innovation during the Archaean eon, which coincides with a rapid diversification of bacterial lineages, gave rise to 27% of major modern gene families. A functional analysis of genes born during this Archaean expansion reveals that they are likely to be involved in electron-transport and respiratory pathways. Genes arising after this expansion show increasing use of molecular oxygen (P = 3.4 × 10(-8)) and redox-sensitive transition metals and compounds, which is consistent with an increasingly oxygenating biosphere.
10. The Hadean-Archaean environment. (United States)
Sleep, Norman H
A sparse geological record combined with physics and molecular phylogeny constrains the environmental conditions on the early Earth. The Earth began hot after the moon-forming impact and cooled to the point where liquid water was present in approximately 10 million years. Subsequently, a few asteroid impacts may have briefly heated surface environments, leaving only thermophile survivors in kilometer-deep rocks. A warm 500 K, 100 bar CO(2) greenhouse persisted until subducted oceanic crust sequestered CO(2) into the mantle. It is not known whether the Earth's surface lingered in a approximately 70 degrees C thermophile environment well into the Archaean or cooled to clement or freezing conditions in the Hadean. Recently discovered approximately 4.3 Ga rocks near Hudson Bay may have formed during the warm greenhouse. Alkalic rocks in India indicate carbonate subduction by 4.26 Ga. The presence of 3.8 Ga black shales in Greenland indicates that S-based photosynthesis had evolved in the oceans and likely Fe-based photosynthesis and efficient chemical weathering on land. Overall, mantle derived rocks, especially kimberlites and similar CO(2)-rich magmas, preserve evidence of subducted upper oceanic crust, ancient surface environments, and biosignatures of photosynthesis.
11. Chaotic Encryption Method Based on Life-Like Cellular Automata
CERN Document Server
Machicao, Marina Jeaneth; Bruno, Odemir M
We propose a chaotic encryption method based on Cellular Automata(CA), specifically on the family called the "Life-Like" type. Thus, the encryption process lying on the pseudo-random numbers generated (PRNG) by each CA's evolution, which transforms the password as the initial conditions to encrypt messages. Moreover, is explored the dynamical behavior of CA to reach a "good" quality as PRNG based on measures to quantify "how chaotic a dynamical system is", through the combination of the entropy, Lyapunov exponent, and Hamming distance. Finally, we present the detailed security analysis based on experimental tests: DIEHARD and ENT suites, as well as Fouriers Power Spectrum, used as a security criteria.
12. Adakitic magmas: modern analogues of Archaean granitoids (United States)
Martin, Hervé
Both geochemical and experimental petrological research indicate that Archaean continental crust was generated by partial melting of an Archaean tholeiite transformed into a garnet-bearing amphibolite or eclogite. The geodynamic context of tholeiite melting is the subject of controversy. It is assumed to be either (1) subduction (melting of a hot subducting slab), or (2) hot spot (melting of underplated basalts). These hypotheses are considered in the light of modern adakite genesis. Adakites are intermediate to felsic volcanic rocks, andesitic to rhyolitic in composition (basaltic members are lacking). They have trondhjemitic affinities (high-Na 2O contents and K 2O/Na 2O˜0.5) and their Mg no. (0.5), Ni (20-40 ppm) and Cr (30-50 ppm) contents are higher than in typical calc-alkaline magmas. Sr contents are high (>300 ppm, until 2000 ppm) and REE show strongly fractionated patterns with very low heavy REE (HREE) contents (Yb≤1.8 ppm, Y≤18 ppm). Consequently, high Sr/Y and La/Yb ratios are typical and discriminating features of adakitic magmas, indicative of melting of a mafic source where garnet and/or hornblende are residual phases. Adakitic magmas are only found in subduction zone environments, exclusively where the subduction and/or the subducted slab are young (situation is well-exemplified in Southern Chile where the Chile ridge is subducted and where the adakitic character of the lavas correlates well with the young age of the subducting oceanic lithosphere. In typical subduction zones, the subducted lithosphere is older than 20 Ma, it is cool and the geothermal gradient along the Benioff plane is low such that the oceanic crust dehydrates before it reaches the solidus temperature of hydrated tholeiite. Consequently, the basaltic slab cannot melt. The released large ion lithophile element (LILE)-rich fluids rise up into the mantle wedge, inducing both its metasomatism and partial melting. Afterwards, the residue is made up of olivine
13. Cellular Metabolic Rate Is Influenced by Life-History Traits in Tropical and Temperate Birds
Ana Gabriela Jimenez; James Van Brocklyn; Matthew Wortman; Williams, Joseph B.
In general, tropical birds have a "slow pace of life," lower rates of whole-animal metabolism and higher survival rates, than temperate species. A fundamental challenge facing physiological ecologists is the understanding of how variation in life-history at the whole-organism level might be linked to cellular function. Because tropical birds have lower rates of whole-animal metabolism, we hypothesized that cells from tropical species would also have lower rates of cellular metabolism than cel...
14. Membrane-Based Functions in the Origin of Cellular Life (United States)
Chipot, Christophe; New, Michael H.; Schweighofer, Karl; Pohorille, Andrew; Wilson, Michael A.
Our objective is to help explain how the earliest ancestors of contemporary cells (protocells) performed their essential functions employing only the molecules available in the protobiological milieu. Our hypothesis is that vesicles, built of amphiphilic, membrane-forming materials, emerged early in protobiological evolution and served as precursors to protocells. We further assume that the cellular functions associated with contemporary membranes, such as capturing and, transducing of energy, signaling, or sequestering organic molecules and ions, evolved in these membrane environments. An alternative hypothesis is that these functions evolved in different environments and were incorporated into membrane-bound structures at some later stage of evolution. We focus on the application of the fundamental principles of physics and chemistry to determine how they apply to the formation of a primitive, functional cell. Rather than attempting to develop specific models for cellular functions and to identify the origin of the molecules which perform these functions, our goal is to define the structural and energetic conditions that any successful model must fulfill, therefore providing physico-chemical boundaries for these models. We do this by carrying out large-scale, molecular level computer simulations on systems of interest.
15. Hydrothermal Conditions and the Origin of Cellular Life. (United States)
Deamer, David W; Georgiou, Christos D
The conditions and properties of hydrothermal vents and hydrothermal fields are compared in terms of their ability to support processes related to the origin of life. The two sites can be considered as alternative hypotheses, and from this comparison we propose a series of experimental tests to distinguish between them, focusing on those that involve concentration of solutes, self-assembly of membranous compartments, and synthesis of polymers. Key Word: Hydrothermal systems.
16. Membrane-Based Functions in the Origin of Cellular Life (United States)
Wilson, Michael A.
How simple membrane peptides performed such essential proto-cellular functions as transport of ions and organic matter across membranes separating the interior of the cell from the environment, capture and utilization of energy, and transduction of environmental signals, is a key question in protobiological evolution. On the basis of detailed, molecular-level computer simulations we investigate how these peptides insert into membranes, self-assemble into higher-order structures and acquire functions. We have studied the insertion of an a-helical peptide containing leucine (L) and serine (S) of the form (LSLLLSL)S into a model membrane. The transmembrane state is metastable, and approximately 15 kcal/mol is required to insert the peptide into the membrane. Investigations of dimers formed by (LSLLLSL)S and glycophorin A demonstrate how the favorable free energy of helix association can offset the unfavorable free energy of insertion, leading to self- assembly of peptide helices in the membrane. An example of a self-assembled structure is the tetrameric transmembrane pore of the influenza virus M2 protein, which is an efficient and selective voltage-gated proton channel. Our simulations explain the gating mechanism and provide guidelines how to reengineering the channel to act as a simple proton pump. In general, emergence of integral membrane proteins appears to be quite feasible and may be easier to envision than the emergence of water-soluble proteins.
17. Stable isotope composition and volume of Early Archaean oceans
DEFF Research Database (Denmark)
Pope, Emily Catherine; Rosing, Minik Thorleif; Bird, Dennis K.
Oxygen and hydrogen isotope compositions of seawater are controlled by volatile fluxes between mantle, lithospheric (oceanic and continental crust) and atmospheric reservoirs. Throughout geologic time oxygen was likely conserved within these Earth system reservoirs, but hydrogen was not, as it can...... escape to space [1]. Hydrogen isotope ratios of serpentinites from the ~3.8Ga Isua Supracrustal Belt in West Greenland are between -53 and -99‰; the highest values are in antigorite ± lizardite serpentinites from a low-strain lithologic domain where hydrothermal reaction of Archaean seawater with oceanic...... of continents present at that time), and the mass of Early Archaean oceans to ~109 to 126% of present day oceans. Oxygen isotope analyses from these Isua serpentinites (δ18O = +0.1 to 5.6‰ relative to VSMOW) indicate that early Archaean δ18OSEAWATER similar to modern oceans. Our observations suggest...
18. Cellular metabolic rate is influenced by life-history traits in tropical and temperate birds. (United States)
Jimenez, Ana Gabriela; Van Brocklyn, James; Wortman, Matthew; Williams, Joseph B
In general, tropical birds have a "slow pace of life," lower rates of whole-animal metabolism and higher survival rates, than temperate species. A fundamental challenge facing physiological ecologists is the understanding of how variation in life-history at the whole-organism level might be linked to cellular function. Because tropical birds have lower rates of whole-animal metabolism, we hypothesized that cells from tropical species would also have lower rates of cellular metabolism than cells from temperate species of similar body size and common phylogenetic history. We cultured primary dermal fibroblasts from 17 tropical and 17 temperate phylogenetically-paired species of birds in a common nutritive and thermal environment and then examined basal, uncoupled, and non-mitochondrial cellular O2 consumption (OCR), proton leak, and anaerobic glycolysis (extracellular acidification rates [ECAR]), using an XF24 Seahorse Analyzer. We found that multiple measures of metabolism in cells from tropical birds were significantly lower than their temperate counterparts. Basal and uncoupled cellular metabolism were 29% and 35% lower in cells from tropical birds, respectively, a decrease closely aligned with differences in whole-animal metabolism between tropical and temperate birds. Proton leak was significantly lower in cells from tropical birds compared with cells from temperate birds. Our results offer compelling evidence that whole-animal metabolism is linked to cellular respiration as a function of an animal's life-history evolution. These findings are consistent with the idea that natural selection has uniquely fashioned cells of long-lived tropical bird species to have lower rates of metabolism than cells from shorter-lived temperate species.
Directory of Open Access Journals (Sweden)
Ana Gabriela Jimenez
20. Trade-off between cellular immunity and life span in mealworm beetles Tenebrio molitor
Institute of Scientific and Technical Information of China (English)
Indrikis KRAMS; Jan(i)na DAUK(S)TE; Inese KIVLENIECE; Ants KAASIK; Tatjana KRAMA; Todd M.FREEBERG; Markus J.RANTALA
Encapsulation is a nonspecific,cellular response through which insects defend themselves against multicellular pathogens.During this immune reaction,haemocytes recognize an object as foreign and cause other haemocytes to aggregate and form a capsule around the object,often consisting of melanized cells.The process of melanisation is accompanied by the formation of potentially toxic reactive oxygen species,which can kill not only pathogens but also host cells.In this study we tested whether the encapsulation response is costly in mealworm beetles Tenebrio molitor.We found a negative relationship between the duration of implantation via a nylon monofilament and remaining life span.We also found a negative relationship between the strength of immune response and remaining life span,suggesting that cellular immunity is costly in T.molitor,and that there is a trade-off between immune response and remaining life span.However,this relationship disappeared at 31-32 hours of implantation at 25 ± 2℃.As the disappearance of a relationship between duration of implantation and lifespan coincided with the highest values of encapsulation response,we concluded that the beetles stopped investment in the production of melanotic cells,as the implant,a synthetic parasite,was fully isolated from the host's tissues.
1. Trade-off between cellular immunity and life span in mealworm beetles Tenebrio molitor
Directory of Open Access Journals (Sweden)
Indrikis KRAMS, Janīna DAUKŠTE, Inese KIVLENIECE, Ants KAASIK, Tatjana KRAMA, Todd M. REEBERG, Markus J. RANTALA
Full Text Available Encapsulation is a nonspecific, cellular response through which insects defend themselves against multicellular pathogens. During this immune reaction, haemocytes recognize an object as foreign and cause other haemocytes to aggregate and form a capsule around the object, often consisting of melanized cells. The process of melanisation is accompanied by the formation of potentially toxic reactive oxygen species, which can kill not only pathogens but also host cells. In this study we tested whether the encapsulation response is costly in mealworm beetles Tenebrio molitor. We found a negative relationship between the duration of implantation via a nylon monofilament and remaining life span. We also found a negative relationship between the strength of immune response and remaining life span, suggesting that cellular immunity is costly in T. molitor, and that there is a trade-off between immune response and remaining life span. However, this relationship disappeared at 31-32 hours of implantation at 25 ± 2℃. As the disappearance of a relationship between duration of implantation and lifespan coincided with the highest values of encapsulation response, we concluded that the beetles stopped investment in the production of melanotic cells, as the implant, a synthetic parasite, was fully isolated from the host’s tissues [Current Zoology 59 (3: 340–346, 2013].
2. Fractional Crystallisation of Archaean Trondhjemite Magma at 12-7 Kbar: Constraints on Rheology of Archaean Continental Crust (United States)
Sarkar, Saheli; Saha, Lopamudra; Satyanarayan, Manavalan; Pati, Jayanta
Fractional Crystallisation of Archaean Trondhjemite Magma at 12-7 Kbar: Constraints on Rheology of Archaean Continental Crust Sarkar, S.1, Saha, L.1, Satyanarayan, M2. and Pati, J.K.3 1. Department of Earth Sciences, Indian Institute of Technology Roorkee, Roorkee-247667, Haridwar, India, 2. HR-ICPMS Lab, Geochemistry Group, CSIR-National Geophysical Research Institute, Hyderabad-50007, India. 3. Department of Earth and Planetary Sciences, Nehru Science Centre, University of Allahabad, Allahabad-211002, India. Tonalite-Trondhjemite-Granodiorite (TTGs) group of rocks, that mostly constitute the Archaean continental crusts, evolved through a time period of ~3.8 Ga-2.7 Ga with major episodes of juvenile magma generations at ~3.6 Ga and ~2.7 Ga. Geochemical signatures, especially HREE depletions of most TTGs conform to formation of this type of magma by partial melting of amphibolites or eclogites at 15-20 kbar pressure. While TTGs (mostly sodic in compositions) dominates the Eoarchaean (~3.8-3.6 Ga) to Mesoarchaean (~3.2-3.0 Ga) domains, granitic rocks (with significantly high potassium contents) became more dominant in the Neoarchaean period. The most commonly accepted model proposed for the formation of the potassic granite in the Neoarchaean time is by partial melting of TTGs along subduction zones. However Archaean granite intrusive into the gabbro-ultramafic complex from Scourie, NW Scotland has been interpreted to have formed by fractional crystallization of hornblende and plagioclase from co-existing trondhjemitic gneiss. In this study we have studied fractional crystallization paths from a Mesoarchaean trondhjemite from the central Bundelkhand craton, India using MELTS algorithm. Fractional crystallization modeling has been performed at pressure ranges of 20 kbar to 7 kbar. Calculations have shown crystallization of garnet-clinopyroxene bearing assemblages with progressive cooling of the magma at 20 kbar. At pressure ranges 19-16 kbar, solid phases
3. Modeling of the competition life cycle using the software complex of cellular automata PyCAlab (United States)
Berg, D. B.; Beklemishev, K. A.; Medvedev, A. N.; Medvedeva, M. A.
The aim of the work is to develop a numerical model of the life cycle of competition on the basis of software complex cellular automata PyCAlab. The model is based on the general patterns of growth of various systems in resource-limited settings. At examples it is shown that the period of transition from an unlimited growth of the market agents to the stage of competitive growth takes quite a long time and may be characterized as monotonic. During this period two main strategies of competitive selection coexist: 1) capture of maximum market space with any reasonable costs; 2) saving by reducing costs. The obtained results allow concluding that the competitive strategies of companies must combine two mentioned types of behavior, and this issue needs to be given adequate attention in the academic literature on management. The created numerical model may be used for market research when developing of the strategies for promotion of new goods and services.
4. Cellular automata-based artificial life system of horizontal gene transfer
Directory of Open Access Journals (Sweden)
Ji-xin Liu
Full Text Available Mutation and natural selection is the core of Darwin's idea about evolution. Many algorithms and models are based on this idea. However, in the evolution of prokaryotes, more and more researches have indicated that horizontal gene transfer (HGT would be much more important and universal than the authors had imagined. Owing to this mechanism, the prokaryotes not only become adaptable in nearly any environment on Earth, but also form a global genetic bank and a super communication network with all the genes of the prokaryotic world. Under this background, they present a novel cellular automata model general gene transfer to simulate and study the vertical gene transfer and HGT in the prokaryotes. At the same time, they use Schrodinger's life theory to formulate some evaluation indices and to discuss the intelligence and cognition of prokaryotes which is derived from HGT.
5. Stylized Facts Generated Through Cellular Automata Models. Case of Study: The Game of Life
CERN Document Server
Coronel-Brizio, H F; Rodriguez-Achach, M E; Stevens-Ramirez, G A
In the present work, a geometrical method to generate a two dimensional random walk by means of a bidimensional Cellular Automaton is presented. We illustrate it by means of Conway's Game of Life with periodical borders, with a large lattice of 3000 x 3000 cells. The obtained random walk is of character anomalous, and its projection to a one dimensional random walk is analyzed, showing that it presents some statistical properties similar to the so-called stylized facts observed in financial time series. We consider that the procedure presented here is important not only because of its simplicity, but also because it could help us to understand and shed light on the stylized facts formation mechanism.
6. A Field Trip to the Archaean in Search of Darwin’s Warm Little Pond
Directory of Open Access Journals (Sweden)
Bruce Damer
Full Text Available Charles Darwin’s original intuition that life began in a “warm little pond” has for the last three decades been eclipsed by a focus on marine hydrothermal vents as a venue for abiogenesis. However, thermodynamic barriers to polymerization of key molecular building blocks and the difficulty of forming stable membranous compartments in seawater suggest that Darwin’s original insight should be reconsidered. I will introduce the terrestrial origin of life hypothesis, which combines field observations and laboratory results to provide a novel and testable model in which life begins as protocells assembling in inland fresh water hydrothermal fields. Hydrothermal fields are associated with volcanic landmasses resembling Hawaii and Iceland today and could plausibly have existed on similar land masses rising out of Earth’s first oceans. I will report on a field trip to the living and ancient stromatolite fossil localities of Western Australia, which provided key insights into how life may have emerged in Archaean, fluctuating fresh water hydrothermal pools, geological evidence for which has recently been discovered. Laboratory experimentation and fieldwork are providing mounting evidence that such sites have properties that are conducive to polymerization reactions and generation of membrane-bounded protocells. I will build on the previously developed coupled phases scenario, unifying the chemical and geological frameworks and proposing that a hydrogel of stable, communally supported protocells will emerge as a candidate Woese progenote, the distant common ancestor of microbial communities so abundant in the earliest fossil record.
7. A Field Trip to the Archaean in Search of Darwin's Warm Little Pond. (United States)
Damer, Bruce
8. Mantle hydrous-fluid interaction with Archaean granite. (United States)
Słaby, E.; Martin, H.; Hamada, M.; Śmigielski, M.; Domonik, A.; Götze, J.; Hoefs, J.; Hałas, S.; Simon, K.; Devidal, J.-L.; Moyen, J.-F.; Jayananda, M.
Water content/species in alkali feldspars from late Archaean Closepet igneous bodies as well as growth and re-growth textures, trace element and oxygen isotope composition have been studied (Słaby et al., 2011). Both processes growth and re-growth are deterministic, however they differ showing increasing persistency in element behaviour during interaction with fluids. The re-growth process fertilized domains and didn't change their oxygen-isotope signature. Water speciation showed persistent behaviour during heating at least up to 600oC. Carbonate crystals with mantle isotope signature are associated with the recrystallized feldspar domains. Fluid-affected domains in apatite provide evidence of halide exchange. The data testify that the observed recrystallization was a high-temperature reaction with fertilized, halide-rich H2O-CO2 mantle-derived fluids of high water activity. A wet mantle being able to generate hydrous plumes, which appear to be hotter during the Archean in comparison to the present time is supposed by Shimizu et al. (2001). Usually hot fluids, which can be strongly carbonic, precede asthenospheric mantle upwelling. They are supposed to be parental to most recognized compositions, which can be derived by their immiscible separation into saline aqueous-silicic and carbonatitic members (Klein-BenDavid et al., 2007). The aqueous fractions are halogen-rich with a significant proportion of CO2. Both admixed fractions are supposed to be fertile. The Closepet granite emplaced in a major shear zone that delimitates two different terrains. Generally such shear zones, at many places, are supposed to be rooted deep into the mantle. The drain, that favoured and controlled magma ascent and emplacement, seemed to remain efficient after granite crystallization. In the southern part of the Closepet batholiths an evidence of intensive interaction of a lower crust fluid (of high CO2 activity) is provided by the extensive charnockitization of amphibolite facies (St
9. Early-life Stress Impacts the Developing Hippocampus and Primes Seizure Occurrence: cellular, molecular, and epigenetic mechanisms
Directory of Open Access Journals (Sweden)
Li-Tung eHuang
Full Text Available Early-life stress includes prenatal, postnatal, and adolescence stress. Early-life stress can affect the development of the hypothalamic-pituitary-adrenal (HPA axis, and cause cellular and molecular changes in the developing hippocampus that can result in neurobehavioral changes later in life. Epidemiological data implicate stress as a cause of seizures in both children and adults. Emerging evidence indicates that both prenatal and postnatal stress can prime the developing brain for seizures and an increase in epileptogenesis. This article reviews the cellular and molecular changes encountered during prenatal and postnatal stress, and assesses the possible link between these changes and increases in seizure occurrence and epileptogenesis in the developing hippocampus. In addititon, the priming effect of prenatal and postnatal stress for seizures and epileptogenesis is discussed. Finally, the roles of epigenetic modifications in hippocampus and HPA axis programming, early-life stress, and epilepsy are discussed.
10. Origin of giant viruses from smaller DNA viruses not from a fourth domain of cellular life. (United States)
Yutin, Natalya; Wolf, Yuri I; Koonin, Eugene V
The numerous and diverse eukaryotic viruses with large double-stranded DNA genomes that at least partially reproduce in the cytoplasm of infected cells apparently evolved from a single virus ancestor. This major group of viruses is known as Nucleocytoplasmic Large DNA Viruses (NCLDV) or the proposed order Megavirales. Among the "Megavirales", there are three groups of giant viruses with genomes exceeding 500kb, namely Mimiviruses, Pithoviruses, and Pandoraviruses that hold the current record of viral genome size, about 2.5Mb. Phylogenetic analysis of conserved, ancestral NLCDV genes clearly shows that these three groups of giant viruses have three distinct origins within the "Megavirales". The Mimiviruses constitute a distinct family that is distantly related to Phycodnaviridae, Pandoraviruses originate from a common ancestor with Coccolithoviruses within the Phycodnaviridae family, and Pithoviruses are related to Iridoviridae and Marseilleviridae. Maximum likelihood reconstruction of gene gain and loss events during the evolution of the "Megavirales" indicates that each group of giant viruses evolved from viruses with substantially smaller and simpler gene repertoires. Initial phylogenetic analysis of universal genes, such as translation system components, encoded by some giant viruses, in particular Mimiviruses, has led to the hypothesis that giant viruses descend from a fourth, probably extinct domain of cellular life. The results of our comprehensive phylogenomic analysis of giant viruses refute the fourth domain hypothesis and instead indicate that the universal genes have been independently acquired by different giant viruses from their eukaryotic hosts.
11. Life under Climate Change Scenarios: Sea Urchins’ Cellular Mechanisms for Reproductive Success
Directory of Open Access Journals (Sweden)
Desislava Bögner
Full Text Available Ocean Acidification (OA represents a major field of research and increased efforts are being made to elucidate its repercussions on biota. Species survival is ensured by successful reproduction, which may be threatened under detrimental environmental conditions, such as OA acting in synergy with other climate change related stressors. Achieving successful gametogenesis, fertilization, and the development of larvae into healthy juveniles and adults is crucial for the perpetuation of species and, thus, ecosystems’ functionality. The considerable vulnerability of the abovementioned developmental stages to the adverse conditions that future OA may impose has been shown in many species, including sea urchins which are commonly used due to the feasibility of their maintenance in captivity and the great amount of gametes that a mature adult is able to produce. In the present review, the latest knowledge about the impact of OA on various stages of the life cycle of sea urchins is summarized with remarks on the possible impact of other stressors. The cellular physiology of the gametes before, at fertilization and, at early development, is extensively described with a focus on the complex enzymatic machinery and the intracellular pH (pHi and Ca2+ homeostasis for their vulnerability when facing adverse conditions such as acidification, temperature variations, or hypoxia.
12. Quality Matters: Systematic Analysis of Endpoints Related to "Cellular Life" in Vitro Data of Radiofrequency Electromagnetic Field Exposure. (United States)
Simkó, Myrtill; Remondini, Daniel; Zeni, Olga; Scarfi, Maria Rosaria
Possible hazardous effects of radiofrequency electromagnetic fields (RF-EMF) at low exposure levels are controversially discussed due to inconsistent study findings. Therefore, the main focus of the present study is to detect if any statistical association exists between RF-EMF and cellular responses, considering cell proliferation and apoptosis endpoints separately and with both combined as a group of "cellular life" to increase the statistical power of the analysis. We searched for publications regarding RF-EMF in vitro studies in the PubMed database for the period 1995-2014 and extracted the data to the relevant parameters, such as cell culture type, frequency, exposure duration, SAR, and five exposure-related quality criteria. These parameters were used for an association study with the experimental outcome in terms of the defined endpoints. We identified 104 published articles, from which 483 different experiments were extracted and analyzed. Cellular responses after exposure to RF-EMF were significantly associated to cell lines rather than to primary cells. No other experimental parameter was significantly associated with cellular responses. A highly significant negative association with exposure condition-quality and cellular responses was detected, showing that the more the quality criteria requirements were satisfied, the smaller the number of detected cellular responses. According to our knowledge, this is the first systematic analysis of specific RF-EMF bio-effects in association to exposure quality, highlighting the need for more stringent quality procedures for the exposure conditions.
13. Using a Virtual Tissue Culture System to Assist Students in Understanding Life at the Cellular Level (United States)
McLauglin, Jacqueline S.; Seaquist, Stephen B.
In every biology course ever taught in the nation's classrooms, and in every biology book ever published, students are taught about the "cell." The cell is as fundamental to biology as the atom is to chemistry. Truly, everything an organism does occurs fundamentally at the cellular level. Beyond memorizing the cellular definition, students are not…
14. Conway's game of life is a near-critical metastable state in the multiverse of cellular automata (United States)
Reia, Sandro M.; Kinouchi, Osame
Conway's cellular automaton Game of Life has been conjectured to be a critical (or quasicritical) dynamical system. This criticality is generally seen as a continuous order-disorder transition in cellular automata (CA) rule space. Life's mean-field return map predicts an absorbing vacuum phase (ρ =0) and an active phase density, with ρ =0.37, which contrasts with Life's absorbing states in a square lattice, which have a stationary density of ρ2D≈0.03. Here, we study and classify mean-field maps for 6144 outer-totalistic CA and compare them with the corresponding behavior found in the square lattice. We show that the single-site mean-field approach gives qualitative (and even quantitative) predictions for most of them. The transition region in rule space seems to correspond to a nonequilibrium discontinuous absorbing phase transition instead of a continuous order-disorder one. We claim that Life is a quasicritical nucleation process where vacuum phase domains invade the alive phase. Therefore, Life is not at the "border of chaos," but thrives on the "border of extinction."
15. Origins of life: An operational definition (United States)
Fleischaker, Gail Raney
Two very different models are used for the scientific study of life's origins: in the Troland-Muller model, life is molecular and its defining characteristic is gene function; in the Oparin-Haldane model, life is cellular and its defining characteristic is metabolic function. While each of these models implicitly defines the living, neither provides criteria by which theemergence of life could be recognized in the laboratory. Anoperational definition of the living makes explicit the system logic of metabolic self-production: (1) that whatever form it may take, life is a function of its biochemical processes; (2) that no single biochemical process has integrity apart from an entire network of processes; (3) that a network of processes can have continuity only by being enclosed within a boundary structure, i.e., by the selective partition of a microenvironment as a domain for the bioenergetic-biosynthetic network; and (4) that life is a single phenomenon, distinct in its continuity of capture and storage of energy in such networks, driving the processes that produce its material constituents. This paper presentsautopoiesis as life-defining and discusses the utility of its criteria in our search for the origins of life on Earth. Enactment of the autopoietic criteria would result in aminimal cell and would demonstrate the experimental recapitulation of life's Archaean origins.
16. Geodynamic evolution of the West and Central Pilbara Craton in Western Australia : a mid-Archaean active continental margin
NARCIS (Netherlands)
Beintema, K.A.
The Archaean era lasted for about one third of the Earth's history, from ca 4.0 until 2.5 billion years ago. Because the Archaean spans such a long time, knowledge about this era is for understanding the evolution of the Earth until the present day, especially because it is the time offormation of m
Energy Technology Data Exchange (ETDEWEB)
Kartikey Gupta [Grade X, Mayura School, Jaipur (India)
Wherever we look, life can be many different forms of life, that is, uncountable varieties of animals and plants occupy the whole world today! But, where and how did it all start? The story of evolution is one of the most interesting theories ever put forward. It refers to - the way that simple and small living things eventually changed into much more functional and bigger beings, in course of time. Charles Darwin had explored this mystery, and had provided the reason- Evolution. Evolution is changing of life forms into more functional ones with respect to their changing environment. However, as odd as it may sound, evolution and extinction are closely linked. Because, the better evolved species survives, and throughout the timeline of evolution, there have been many extinction waves. They were all occurred naturally thus proving that the very process of extinction is natural. The earth has seen many variations of global temperature; it has suffered various ice ages, which had also many a times threatened to eradicate most life from the planet. But, every time life has found a way to go on. Therefore, whatever life we see today has resulted from the ongoing long process of evolution. After millions of years, finally we humans have come into existence, and today are the leading species of the world. But, we may possibly be very close to another major extinction wave, the root causes of which are both natural and man-made, but the part played by the latter is much more than the former. Global Warming has now started affecting all kinds of life on the planet, and it is our responsibility as the leading and most intellectual species to try to save our earth. A study reveals that 60% Indian people do not actually know about Global Warming, and that the number of youth aware of Global Warming and Its impacts are much more than the number of adults. About 75% Indians believe that it is the sole responsibility of the government to solve the problems related to
18. Archaean Greenstone Belt Architecture and Stratigraphy: are Comparisons With Ophiolites and Oceanic Plateaux Valid? (United States)
Bedard, J. H.; Bleeker, W.; Leclerc, F.
Archaean greenstone belts and coeval plutonic belts (dominated by TTGs, tonalite-tronhjemite-granodiorite), are commonly interpreted to represent assembled fragments of oceanic crust, oceanic plateaux or juvenile arc terranes, variably reworked by Archaean orogenic processes related to the operation of plate tectonics. However, many of the lava successions that have been interpreted to represent accreted oceanic plateaux are demonstrably ensialic, can be correlated over long distances along-strike, have depositional contacts onto older continental crustal rocks, show tholeiitic to calc-alkaline cyclicity, and have isotopic signatures indicating assimilation of older felsic crust. Inferred Archaean ophiolites do not have sheeted dyke complexes or associated mantle rocks, and cannot be proven to be oceanic terranes formed by seafloor-spreading. Archaean supracrustal sequences are typically dominated by tholeiitic to komatiitic lavas, typically interpreted to represent the products of decompression melting of mantle plumes. Subordinate proportions of andesites, dacites and rhyolites also occur, and these, together with the coeval TTGs, are generally interpreted to represent arc magmas. In the context of uniformitarian interpretations, the coeval emplacement of putative arc- and plume-related magmas requires extremely complex geodynamic scenarios. However, the relative rarity of the archetypal convergent margin magma type (andesite) in Archaean sequences, and the absence of Archaean blueschists, ultra-high-pressure terranes, thrust and fold belts, core complexes and ophiolites, along with theoretical arguments against Archaean subduction, together imply that Archaean cratonic crust was not formed through uniformitarian plate-tectonic processes. A simpler interpretation involves soft intraoceanic collisions of thick (30-50km), plume-related, basaltic-komatiitic oceanic plateaux, with ongoing mafic magmatism leading to anatexis of the hydrated plateau base to generate
19. Manganiferous minerals of the epidote group from the Archaean basement of West Greenland
DEFF Research Database (Denmark)
Katerinopoulou, Anna; Balic Zunic, Tonci; Kolb, Jochen
The chemical compositions and crystal structures of Mn3+-containing minerals from the epidote group in Greenland rocks are investigated and described in detail. They occur in hydrothermally altered Archaean mafic sequences within the gneissic complex of the North Atlantic craton of West Greenland...
20. Record of mid-Archaean subduction from metamorphism in the Barberton terrain, South Africa. (United States)
Moyen, Jean-François; Stevens, Gary; Kisters, Alexander
1. Posterior tail development in the salamander Eurycea cirrigera: exploring cellular dynamics across life stages. (United States)
Vaglia, Janet L; Fornari, Chet; Evans, Paula K
During embryogenesis, the body axis elongates and specializes. In vertebrate groups such as salamanders and lizards, elongation of the posterior body axis (tail) continues throughout life. This phenomenon of post-embryonic tail elongation via addition of vertebrae has remained largely unexplored, and little is known about the underlying developmental mechanisms that promote vertebral addition. Our research investigated tail elongation across life stages in a non-model salamander species, Eurycea cirrigera (Plethodontidae). Post-embryonic addition of segments suggests that the tail tip retains some aspects of embryonic cell/tissue organization and gene expression throughout the life cycle. We describe cell and tissue differentiation and segmentation of the posterior tail using serial histology and expression of the axial tissue markers, MF-20 and Pax6. Embryonic expression patterns of HoxA13 and C13 are shown with in situ hybridization. Tissue sections reveal that the posterior spinal cord forms via cavitation and precedes development of the underlying cartilaginous rod after embryogenesis. Post-embryonic tail elongation occurs in the absence of somites and mesenchymal cells lateral to the midline express MF-20. Pax6 expression was observed only in the spinal cord and some mesenchymal cells of adult Eurycea tails. Distinct temporal and spatial patterns of posterior Hox13 gene expression were observed throughout embryogenesis. Overall, important insights to cell organization, differentiation, and posterior Hox gene expression may be gained from this work. We suggest that further work on gene expression in the elongating adult tail could shed light on mechanisms that link continual axial elongation with regeneration.
2. Inseparable tandem: evolution chooses ATP and Ca2+ to control life, death and cellular signalling. (United States)
Plattner, Helmut; Verkhratsky, Alexei
From the very dawn of biological evolution, ATP was selected as a multipurpose energy-storing molecule. Metabolism of ATP required intracellular free Ca(2+) to be set at exceedingly low concentrations, which in turn provided the background for the role of Ca(2+) as a universal signalling molecule. The early-eukaryote life forms also evolved functional compartmentalization and vesicle trafficking, which used Ca(2+) as a universal signalling ion; similarly, Ca(2+) is needed for regulation of ciliary and flagellar beat, amoeboid movement, intracellular transport, as well as of numerous metabolic processes. Thus, during evolution, exploitation of atmospheric oxygen and increasingly efficient ATP production via oxidative phosphorylation by bacterial endosymbionts were a first step for the emergence of complex eukaryotic cells. Simultaneously, Ca(2+) started to be exploited for short-range signalling, despite restrictions by the preset phosphate-based energy metabolism, when both phosphates and Ca(2+) interfere with each other because of the low solubility of calcium phosphates. The need to keep cytosolic Ca(2+) low forced cells to restrict Ca(2+) signals in space and time and to develop energetically favourable Ca(2+) signalling and Ca(2+) microdomains. These steps in tandem dominated further evolution. The ATP molecule (often released by Ca(2+)-regulated exocytosis) rapidly grew to be the universal chemical messenger for intercellular communication; ATP effects are mediated by an extended family of purinoceptors often linked to Ca(2+) signalling. Similar to atmospheric oxygen, Ca(2+) must have been reverted from a deleterious agent to a most useful (intra- and extracellular) signalling molecule. Invention of intracellular trafficking further increased the role for Ca(2+) homeostasis that became critical for regulation of cell survival and cell death. Several mutually interdependent effects of Ca(2+) and ATP have been exploited in evolution, thus turning an originally
Directory of Open Access Journals (Sweden)
Javad Aramideh
Full Text Available Wireless sensor networks have attracted attention of researchers considering their abundant applications. One of the important issues in this network is limitation of energy consumption which is directly related to life of the network. One of the main works which have been done recently to confront with this problem is clustering. In this paper, an attempt has been made to present clustering method which performs clustering in two stages. In the first stage, it specifies candidate nodes for being head cluster with fuzzy method and in the next stage, the node of the head cluster is determined among the candidate nodes with cellular learning automata. Advantage of the clustering method is that clustering has been done based on three main parameters of the number of neighbors, energy level of nodes and distance between each node and sink node which results in selection of the best nodes as a candidate head of cluster nodes. Connectivity of network is also evaluated in the second part of head cluster determination. Therefore, more energy will be stored by determining suitable head clusters and creating balanced clusters in the network and consequently, life of the network increases.
4. Ancient micrometeorites suggestive of an oxygen-rich Archaean upper atmosphere (United States)
Tomkins, Andrew G.; Bowlt, Lara; Genge, Matthew; Wilson, Siobhan A.; Brand, Helen E. A.; Wykes, Jeremy L.
It is widely accepted that Earth’s early atmosphere contained less than 0.001 per cent of the present-day atmospheric oxygen (O2) level, until the Great Oxidation Event resulted in a major rise in O2 concentration about 2.4 billion years ago. There are multiple lines of evidence for low O2 concentrations on early Earth, but all previous observations relate to the composition of the lower atmosphere in the Archaean era; to date no method has been developed to sample the Archaean upper atmosphere. We have extracted fossil micrometeorites from limestone sedimentary rock that had accumulated slowly 2.7 billion years ago before being preserved in Australia’s Pilbara region. We propose that these micrometeorites formed when sand-sized particles entered Earth’s atmosphere and melted at altitudes of about 75 to 90 kilometres (given an atmospheric density similar to that of today). Here we show that the FeNi metal in the resulting cosmic spherules was oxidized while molten, and quench-crystallized to form spheres of interlocking dendritic crystals primarily of magnetite (Fe3O4), with wüstite (FeO)+metal preserved in a few particles. Our model of atmospheric micrometeorite oxidation suggests that Archaean upper-atmosphere oxygen concentrations may have been close to those of the present-day Earth, and that the ratio of oxygen to carbon monoxide was sufficiently high to prevent noticeable inhibition of oxidation by carbon monoxide. The anomalous sulfur isotope (Δ33S) signature of pyrite (FeS2) in seafloor sediments from this period, which requires an anoxic surface environment, implies that there may have been minimal mixing between the upper and lower atmosphere during the Archaean.
5. Implications of a reducing and warm (not hot) Archaean ambient mantle for ancient element cycles (United States)
Aulbach, Sonja
There is considerable uncertainty regarding the oxygen partial pressure (fO2) and potential temperature (TP) of the ambient convecting mantle throughout Earth's history. Rare Archaean eclogite suites have elemental and isotopic compositions indicative of formation of crustal protoliths in oceanic spreading ridges, hence unaffected by continental sources. These include some eclogite xenoliths derived from cratonic mantle lithosphere and orogenic eclogites marking the exhumation of oceanic crust at Pacific-type margins. Their compositions may retain a memory of the thermal and redox state of the Archaean convecting mantle sources that gave rise to their low-pressure protoliths. Archaean eclogites have TiO2-REE relationships consistent with fractional crystallisation of olivine±plagioclase and cpx during formation of picritic protoliths from a melt that separated from a garnet-free peridotite source, implying intersection of the solidus at ≤2.5 to 3.0 GPa [1]. Low melt fractions (oceanic spreading ridges [7] in the Archaean, with implications for the composition and oxygenation of the palaeo-atmosphere. Subsequent subduction of such reducing oceanic crust must have also affected the cycling of volatile elements (soluble instead of molecular species [9]) and of redox-sensitive ore-forming metals [10] during metamorphic dehydration and melting reactions. [1] Aulbach&Viljoen (2015) Earth Planet Sci Lett 431; [2] Herzberg et al. (2010) Earth Planet Sci Lett 292; [3] Sizova et al. (2010) Lithos 116; [4] Rey&Coltice (2008) Geology 36; [5] Dasgupta (2013) RIMG 75; [6] Magni et al. (2014) G3 15; [7] Li&Lee (2004) EPSL 228; [8] Stagno et al. (2013) Nature 493; [9] Sverjensky et al. (2014) Nat Geosci 7; [10] Evans & Tomkins (2011) Earth Planet Sci Lett 308.
6. Palaeoproterozoic prograde metasomatic-metamorphic overprint zones in Archaean tonalitic gneisses, eastern Finland
Directory of Open Access Journals (Sweden)
Pajunen, M.
Full Text Available Several occurrences of coarse-grained kyanite rocks are exposed in the Archaean area of eastern Finland in zones trending predominantly northwest-southeast that crosscut all the Archaean structures and, locally, the Palaeoproterozoic metadiabase dykes, too. Their metamorphic history illustrates vividly Palaeoproterozoic reactivation of the Archaean craton. The early-stage kyanite rocks were formed within the framework of ductile shearing or by penetrative metasomatism in zones of mobile brecciation. Static-state coarse-grained mineral growth during the ongoing fluid activity covered the early foliated fabrics, and metasomatic zoning developed. The early-stage metasomatism was characterized by Si, Ca and alkali leaching. The late-stage structures are dilatational semi-brittle faults and fractures with unstrained, coarse-grained fabrics often formed by metasomatic reactions displaying Mg enrichment along grain boundaries. Metamorphism proceeded from the low-T early-stage Chl-Ms-Qtz, Ky/And-St, eventually leading to the high-T late-stage Crd-Sil assemblages. The thermal peak, at 600-620°C/4-5 kbar, of the process is dated to 1852+2 Ma (U-Pb on xenotime. Al-silicate growth successions in different locations record small variations in the Palaeoproterozoic clockwise P-T paths. Pressure decreased by c. 1 kbar between the early and late stage, i.e. some exhumation had occurred. Fluid composition also changed during the progression, from saline H2O to CO2, rich. Weak retrograde features of high-T phases indicate a rapid cooling stage and termination of fluid activity. The early-stage Ky-St assemblages resemble those described from nearby Palaeoproterozoic metasediments in the Kainuu and North Karelia Schist Belts, where the metamorphic peak was achieved late with respect to Palaeoproterozoic structures. The static Ky-St metamorphism in kyanite rocks was generated by fluid-induced leaching processes at elevated T during the post-orogenic stage after
7. Impact of comorbid anxiety and depression on quality of life and cellular immunity changes in patients with digestive tract cancers
Institute of Scientific and Technical Information of China (English)
Fu-Ling Zhou; Wang-Gang Zhang; Yong-Chang Wei; Kang-Ling Xu; Ling-Yun Hui; Xu-Sheng Wang; Ming-Zhong Li
AIM: A study was performed to investigate the impact of comorbid anxiety and depression (CAD) on quality of life (QOL) and cellular immunity changes in patients with digestive tract cancers.METHODS: One hundred and fifty-six cases of both sexes with cancers of the digestive tract admitted between March 2001 and February 2004 in the Department of Medical Oncology, First Affiliated Hospital of Xi'an Jiaotong University were randomly enrolled in the study. Depressive and anxiety disorder diagnoses were assessed by using the Structured Clinical Interview for DSM-Ⅳ. All adult patients were evaluated with the Hamilton depressive scale (HAMD, the 24-item version), the Hamilton anxiety scale (HAMA, a modified 14-item version), quality of life questionnaire-core 30 (QLQ-C30), social support rating scale (SSRS), simple coping style questionnaire (SCSQ), and other questionnaires, respectively. In terms of HAMD ≥ 20 and HAMA ≥ 14, the patients were categorized, including CAD (n = 31) in group A, anxiety disorder (n = 23) in group B,depressive disorder (n = 37) in group C, and non-disorder (n = 65) in group D. Immunological parameters such as T-lymphocyte subsets and natural killer (NK) cell activities in peripheral blood were determined and compared among the four groups.RESULTS: The incidence of CAD was 21.15% in patients with digestive tract cancers. The average scores of social support was 43.67±7.05 for 156 cases, active coping 20.34±7.33, and passive coping 9.55±5.51. Compared with group D, subjective support was enhanced slightly in group A, but social support, objective support, and utilization of support reduced, especially utilization of support with significance (6.16 vs 7.80, P<0.05); total scores of active coping decreased, while passive coping reversed; granulocytes proliferated, monocytes declined,and lymphocytes declined significantly (32.87 vs 34.00,P<0.05); moreover, the percentage of CD3, CD4, CD8and CD56 in T lymphocyte subsets was in lower
8. The effect of thicker oceanic crust in the Archaean on the growth of continental crust through time (United States)
Wilks, M. E.
Present crustal evolution models fail to account for the generation of the large volume of continental crust in the required time intervals. All Archaean plate tectonic models, whether invoking faster spreading rates, similar to today's spreading rates, or longer ridge lengths, essentially propose that continental crust has grown by island arc accretion due to the subduction of oceanic crust. The petrological differences that characterize the Archaean from later terrains result from the subduction of hotter oceanic crust into a hotter mantle. If the oceanic crust was appreciably thicker in the Archaean, as geothermal models would indicate, this thicker crust is surely going to have an effect on tectonic processes. A more valid approach is to compare the possible styles of convergence of thick oceanic crust with modern convergence zones. The best modern analog occurs where thick continental crust is colliding with thick continental crust. Oceanic crustal collision on the scale of the present-day Himalayan continental collision zone may have been a frequent occurrence in the Archaean, resulting in extensive partial melting of the hydrous underthrust oceanic crust to produce voluminous tonalite melts, leaving a depleted stabilized basic residuum. Present-day island arc accretion may not have been the dominant mechanism for the growth of the early Archaean crust.
9. Evidence for recycled Archaean oceanic mantle lithosphere in the Azores plume. (United States)
Schaefer, Bruce F; Turner, Simon; Parkinson, Ian; Rogers, Nick; Hawkesworth, Chris
10. Structural and Metamorphic Evolution of the Archaean High-pressure Granulite in Datong-Huaian Area, North China
NARCIS (Netherlands)
Zhang, J.
The Archaean granulite terrain in the Datong-Huaian area, north China, comprises a basement complex of fe lsic and mafic granulite (TTG gneiss), overlain by a sedimentary sequence dominated by metapelite and metapsammite (khondalite series). Both lithological associations are separated by a tectonic
11. Geochemical and biologic constraints on the Archaean atmosphere and climate – A possible solution to the faint early Sun paradox
DEFF Research Database (Denmark)
Rosing, Minik Thorleif; Brid, D. K.; Sleep, N. H.;
There is ample geological evidence that Earth’s climate resembled the present during the Archaean, despite a much lower solar luminosity. This was cast as a paradox by Sagan and Mullen in 1972. Several solutions to the paradox have been suggested, mostly focusing on adjustments of the radiative p...
12. Archaean asteroid impacts, banded iron formations and MIF-S anomalies: A discussion (United States)
Glikson, Andrew
The origin of mass-independent fractionation (MIF-S) of sulphur isotopes ( δ33S) recorded in sediments older than 2.45 Ga is widely interpreted in terms of UV-triggered reactions under oxygen-poor ozone-depleted atmosphere conditions (Farquhar, J., Bao, H., Thiemens, M. [2000] Science, 289, 756; Farquhar, J., Peters, M., Johnston, D.T., Strauss, H., Masterson, A., Wiechert, U., Kaufman, A.J. [2007] Nature, 449, 706-709; Farquhar, J., Wing, B.A. [2003] Earth Planet. Sci. Lett., 213, 1-13; Kaufman, A.J., Johnston, D.T., Farquhar, J., Masterson, A.L., Lyons, T.W., Bates, S., Anbar, A.D., Arnold, G.L., Garvin, J., Buick, R. [2007a] Science, 317, 1900-1903; Kaufman, A.J., Farquhar, J., Johnston, D.T., Lyons, T.W., Arnold, G.L., Anbar, A. [2007b] Deep Time Drilling Project of the NASA Astrobiology Drilling Program). Observed mid-Archaean variability of MIF-S signatures raises questions regarding the extent of atmospheric anoxia (Ohmoto, H., Watanabe, Y., Ikemi, H., Poulson, H.R., Taylor, B. [2006] Nature, 406, 908-991; Farquhar et al., 2007). Late Archaean (˜2.7-2.5 Ga) and mid-Archaean (˜3.2 Ga) sequences in the Pilbara Craton (Western Australia) and Kaapvaal Craton (South Africa), in which MIF-S data were measured, contain asteroid impact ejecta units dated as 2.48, 2.56, 2.63, 3.24, 3.26 and 3.47 Ga old (Lowe, D.R., Byerly, G.R., Kyte, T., Shukolyukov, A., Asaro, F., Krull, A. [2003] Astrobiology, 3, 7-48; Simonson, B.M., Hassler, S.W. [1997] Aust. J. Earth Sci., 44, 37-48; Simonson, B.M., Glass, B.P. [2004] Ann. Rev. Earth Planet. Sci., 32, 329-361; Glikson, A.Y. [2004] Astrobiology, 4, 19-50; Glikson, A.Y. [2006] Earth Planet. Sci. Lett., 246, 149-160; Glikson, A.Y. [2008] Earth Planet. Sci. Lett., 267, 558-570). Mass balance calculations based on iridium and 53Cr/ 52Cr isotopic anomalies (Byerly, G.R., Lowe, D.R. [1994] Geochim. Cosmochim. Acta, 58, 3469-3486; Kyte, F.T., Shukloyukov, A., Lugmair, G.W., Lowe, D.R., Byerly, G.R. [2003] Geology, 31, 283-286) and
13. Deep origin and hot melting of an Archaean orogenic peridotite massif in Norway. (United States)
Spengler, Dirk; van Roermund, Herman L M; Drury, Martyn R; Ottolini, Luisa; Mason, Paul R D; Davies, Gareth R
The buoyancy and strength of sub-continental lithospheric mantle is thought to protect the oldest continental crust (cratons) from destruction by plate tectonic processes. The exact origin of the lithosphere below cratons is controversial, but seems clearly to be a residue remaining after the extraction of large amounts of melt. Models to explain highly melt-depleted but garnet-bearing rock compositions require multi-stage processes with garnet and clinopyroxene possibly of secondary origin. Here we report on orogenic peridotites (fragments of cratonic mantle incorporated into the crust during continent-continent plate collision) from Otrøy, western Norway. We show that the peridotites underwent extensive melting during upwelling from depths of 350 kilometres or more, forming a garnet-bearing cratonic root in a single melting event. These peridotites appear to be the residue after Archaean aluminium depleted komatiite magmatism.
14. A proposal for formation of Archaean stromatolites before the advent of oxygenic photosynthesis
Directory of Open Access Journals (Sweden)
John Frederick Allen
15. Oxygen produced by cyanobacteria in simulated Archaean conditions partly oxidizes ferrous iron but mostly escapes-conclusions about early evolution. (United States)
Rantamäki, Susanne; Meriluoto, Jussi; Spoof, Lisa; Puputti, Eeva-Maija; Tyystjärvi, Taina; Tyystjärvi, Esa
The Earth has had a permanently oxic atmosphere only since the great oxygenation event (GOE) 2.3-2.4 billion years ago but recent geochemical research has revealed short periods of oxygen in the atmosphere up to a billion years earlier before the permanent oxygenation. If these "whiffs" of oxygen truly occurred, then oxygen-evolving (proto)cyanobacteria must have existed throughout the Archaean aeon. Trapping of oxygen by ferrous iron and other reduced substances present in Archaean oceans has often been suggested to explain why the oxygen content of the atmosphere remained negligible before the GOE although cyanobacteria produced oxygen. We tested this hypothesis by growing cyanobacteria in anaerobic high-CO2 atmosphere in a medium with a high concentration of ferrous iron. Microcystins are known to chelate iron, which prompted us also to test the effects of microcystins and nodularins on iron tolerance. The results show that all tested cyanobacteria, especially nitrogen-fixing species grown in the absence of nitrate, and irrespective of the ability to produce cyanotoxins, were iron sensitive in aerobic conditions but tolerated high concentrations of iron in anaerobicity. This result suggests that current cyanobacteria would have tolerated the high-iron content of Archaean oceans. However, only 1 % of the oxygen produced by the cyanobacterial culture was trapped by iron, suggesting that large-scale cyanobacterial photosynthesis would have oxygenated the atmosphere even if cyanobacteria grew in a reducing ocean. Recent genomic analysis suggesting that ability to colonize seawater is a secondary trait in cyanobacteria may offer a partial explanation for the sustained inefficiency of cyanobacterial photosynthesis during the Archaean aeon, as fresh water has always covered a very small fraction of the Earth's surface. If oxygenic photosynthesis originated in fresh water, then the GOE marks the adaptation of cyanobacteria to seawater, and the late-Proterozoic increase
16. Hydrogen and Oxygen Isotope Composition of Archaean Oceans Preserved in the ~3.8 Ga Isua Supracrustal Belt (United States)
Pope, E. C.; Rosing, M.; Bird, D. K.
The hydrogen isotope composition of Earth’s oceans is dependent on fluxes from the mantle, continental crust, surficial and groundwater reservoirs, and the incoming and outgoing flux of hydrogen from space. δD values of serpentinites from the Isua supracrustal belt in West Greenland range from -53 to -99‰. The upper limit of these values demonstrably preserves a signature of original seawater metasomatism, and gives a lower limit δD value for early Archaean oceans of -26‰ based on equilibrium fractionation. We propose that the progressive increase in δDOCEAN since this time is due to the preferential uptake of hydrogen in continent-forming minerals, and to hydrogen escape via biogenic methanogenesis. At most, 1.4x1022 mol H2 has been lost due to hydrogen escape, depending on the volume of continents already present at ca. 3.8 Ga, and oceans at this time were likely ~109 to 125% the size of modern day oceans. This upper limit suggests that atmospheric methane levels in the Archaean were less than 500ppmv, limiting the extent to which atmospheric greenhouse gases counteracted the faint early Sun. Oxygen isotope compositions from the same serpentinites (+0.1 to 5.6‰) indicate that the δ18O of Early Archaean oceans was ~ 0-4‰; similar to modern values. Based on this, we propose that low δ18O values of Archaean and Paleozoic cherts and carbonates are not a function of changing ocean isotope composition, but rather are due to isotopic exchange with shallow hydrothermal fluids on the ocean floor or during diagenesis.
17. Pre-biotic organic molecules in hydrothermal quartz veins from the Archaean Yilgarn province, Australia (United States)
Mayer, Christian; Schreiber, Ulrich; Dyker, Gerald; Kirnbauer, Thomas; Mulder, Ines; Sattler, Tobias; Schöler, Heinfried; Tubbesing, Christoph
According to a model recently published by Schreiber et al. (OLEB 2012), pre-biotic organic molecules as earliest markers for a chemical evolution have been formed in tectonic faults of the first Archaean cratons. These faults are often documented by quartz- and other hydrothermal vein mineralization. During the growth of these quartzes, small portions of hydrothermal fluids are enclosed which conserve the chemical composition of the given fluid medium. According to our model, the preconditions for the geochemical formation of organic molecules are a suitable carbon source (e.g. carbon dioxide), varying P/T conditions, and catalysts. This given, rising hydrothermal fluids such as mineral-rich water and supercritical carbon dioxide in deep faults with contacts to the upper earth mantle offer conditions which allow for reactions similar to the Fischer-Tropsch synthesis. So far, the inclusions which possibly have conserved the products of these reactions have not been analyzed for possible organic constituents. First analytical results of a Mesozoic hydrothermal quartz vein from central Germany (Taunus) reveal that several organic compounds are found in fluid inclusions. However, the true origin of these compounds is unclear due to possible contamination by adjacent Corg-rich metasediments. Therefore, we have extended the study to hydrothermal quartz veins from the Archaean Yilgarn craton, to impact-generated quartz veins of the Shoemaker-Crater as well as to hydrothermal quartz boulders from a 2.7 to 3 billion years old conglomerate near Murchison (Western Australia). In one of the samples from the conglomerate, a wide spectrum of organic compounds such as bromomethane, butane, isoprene, benzene, and toluene have been detected. The time interval between the quartz formation, its erosion and its sedimentation is unknown. Possibly, the analyzed quartz sample was formed in a hydrothermal vein long before any living cells have existed on earth. In this case, the given
18. Sink or swim? Geodynamic and petrological model constraints on the fate of Archaean primary crust (United States)
Kaus, B.; Johnson, T.; Brown, M.; VanTongeren, J. A.
Ambient mantle potential temperatures in the Archaean were significantly higher than 1500 °C, leading to a high percent of melting and generating thick MgO-rich primary crust underlain by highly residual mantle. However, the preserved volume of this crust is low suggesting much of it was recycled. Here we couple calculated phase equilibria for hydrated and anhydrous low to high MgO crust compositions and their complementary mantle residues with 2-D numerical geodynamic models to investigate lithosphere dynamics in the early Earth. We show that, with increasing ambient mantle potential temperature, the density of primary crust increases more dramatically than the density of residual mantle decreases and the base of MgO-rich primary crust becomes gravitationally unstable with respect to the underlying mantle even when fully hydrated. To study this process we use geodynamic models that include the effects of melt extraction, crust formation and depletion of the mantle in combination with laboratory-constrained dislocation and diffusion creep rheologies for the mantle. The models show that the base of the gravitationally unstable lithosphere delaminates through relatively small-scale Rayleigh-Taylor instabilities, but only if the viscosity of the mantle lithosphere is sufficiently low. Thickening of the crust above upwelling mantle and heating at the base of the crust are the main mechanisms that trigger the delamination process. Scaling laws were developed that are in good agreement with the numerical simulations and show that the key parameters that control the instability are the density contrast between crust and underlying mantle lithosphere, the thickness of the unstable layer and the effective viscosity of the upper mantle. Depending on uncertainties in the melting relations and rheology (hydrous or anhydrous) of the mantle, this process is shown to efficiently recycle the crust above potential temperatures of 1550-1600 °C. However, below these temperatures
19. Percolation of diagenetic fluids in the Archaean basement of the Franceville basin (United States)
Mouélé, Idalina Moubiya; Dudoignon, Patrick; Albani, Abderrazak El; Cuney, Michel; Boiron, Marie-Christine; Gauthier-Lafaye, François
The Palaeoproterozoic Franceville basin, Gabon, is mainly known for its high-grade uranium deposits, which are the only ones known to act as natural nuclear fission reactors. Previous work in the Kiéné region investigated the nature of the fluids responsible for these natural nuclear reactors. The present work focuses on the top of the Archaean granitic basement, specifically, to identify and date the successive alteration events that affected this basement just below the unconformity separating it from the Palaeoproterozoic basin. Core from four drill holes crosscutting the basin-basement unconformity have been studied. Dating is based on U-Pb isotopic analyses performed on monazite. The origin of fluids is discussed from the study of fluid inclusion planes (FIP) in quartz from basement granitoids. From the deepest part of the drill holes to the unconformable boundary with the basin, propylitic alteration assemblages are progressively replaced by illite and locally by a phengite + Fe chlorite ± Fe oxide assemblage. Illitic alteration is particularly strong along the sediment-granitoid contact and is associated with quartz dissolution. It was followed by calcite and anhydrite precipitation as fracture fillings. U-Pb isotopic dating outlines three successive events: a 3.0-2.9-Ga primary magmatic event, a 2.6-Ga propylitic alteration and a late 1.9-Ga diagenetic event. Fluid inclusion microthermometry suggests the circulation of three types of fluids: (1) a Na-Ca-rich diagenetic brine, (2) a moderately saline (diagenetic + meteoric) fluid, and (3) a low-salinity fluid of probable meteoric origin. These fluids are similar to those previously identified within the overlying sedimentary rocks of the Franceville basin. Overall, the data collected in this study show that the Proterozoic-Archaean unconformity has operated as a major flow corridor for fluids circulation, around 1.9 Ga. highly saline diagenetic brines; hydrocarbon-rich fluids derived from organic matter
20. Exposure of Daphnia magna to trichloroethylene (TCE) and vinyl chloride (VC): evaluation of gene transcription, cellular activity, and life-history parameters. (United States)
Houde, Magali; Douville, Mélanie; Gagnon, Pierre; Sproull, Jim; Cloutier, François
Trichloroethylene (TCE) is a ubiquitous contaminant classified as a human carcinogen. Vinyl chloride (VC) is primarily used to manufacture polyvinyl chloride and can also be a degradation product of TCE. Very few data exist on the toxicity of TCE and VC in aquatic organisms particularly at environmentally relevant concentrations. The aim of this study was to evaluate the sub-lethal effects (10 day exposure; 0.1; 1; 10 µg/L) of TCE and VC in Daphnia magna at the gene, cellular, and life-history levels. Results indicated impacts of VC on the regulation of genes related to glutathione-S-transferase (GST), juvenile hormone esterase (JHE), and the vitelline outer layer membrane protein (VMO1). On the cellular level, exposure to 0.1, 1, and 10 µg/L of VC significantly increased the activity of JHE in D. magna and TCE increased the activity of chitinase (at 1 and 10 µg/L). Results for life-history parameters indicated a possible tendency of TCE to affect the number of molts at the individual level in D. magna (p=0.051). Measurement of VG-like proteins using the alkali-labile phosphates (ALP) assay did not show differences between TCE treated organisms and controls. However, semi-quantitative measurement using gradient gel electrophoresis (213-218 kDa) indicated significant decrease in VG-like protein levels following exposure to TCE at all three concentrations. Overall, results indicate effects of TCE and VC on genes and proteins related to metabolism, reproduction, and growth in D. magna.
1. Archaean zircons in Miocene oceanic hotspot rocks establish ancient continental crust beneath Mauritius. (United States)
Ashwal, Lewis D; Wiedenbeck, Michael; Torsvik, Trond H
2. Iron isotope fractionation during microbial dissimilatory iron oxide reduction in simulated Archaean seawater. (United States)
Percak-Dennett, E M; Beard, B L; Xu, H; Konishi, H; Johnson, C M; Roden, E E
The largest Fe isotope excursion yet measured in marine sedimentary rocks occurs in shales, carbonates, and banded iron formations of Neoarchaean and Paleoproterozoic age. The results of field and laboratory studies suggest a potential role for microbial dissimilatory iron reduction (DIR) in producing this excursion. However, most experimental studies of Fe isotope fractionation during DIR have been conducted in simple geochemical systems, using pure Fe(III) oxide substrates that are not direct analogues to phases likely to have been present in Precambrian marine environments. In this study, Fe isotope fractionation was investigated during microbial reduction of an amorphous Fe(III) oxide-silica coprecipitate in anoxic, high-silica, low-sulphate artificial Archaean seawater at 30 °C to determine if such conditions alter the extent of reduction or isotopic fractionations relative to those observed in simple systems. The Fe(III)-Si coprecipitate was highly reducible (c. 80% reduction) in the presence of excess acetate. The coprecipitate did not undergo phase conversion (e.g. to green rust, magnetite or siderite) during reduction. Iron isotope fractionations suggest that rapid and near-complete isotope exchange took place among all Fe(II) and Fe(III) components, in contrast to previous work on goethite and hematite, where exchange was limited to the outer few atom layers of the substrate. Large quantities of low-δ(56)Fe Fe(II) (aqueous and solid phase) were produced during reduction of the Fe(III)-Si coprecipitate. These findings shed new light on DIR as a mechanism for producing Fe isotope variations observed in Neoarchaean and Paleoproterozoic marine sedimentary rocks.
3. An Archaean heavy bombardment from a destabilized extension of the asteroid belt. (United States)
Bottke, William F; Vokrouhlický, David; Minton, David; Nesvorný, David; Morbidelli, Alessandro; Brasser, Ramon; Simonson, Bruce; Levison, Harold F
4. Gold deposits in the late Archaean Nzega-Igunga greenstone belt, central plateau of tanzania
Energy Technology Data Exchange (ETDEWEB)
Feiss, P.G.; Siyomana, S.
2.2 m oz of gold have been produced, since 1935, from late Archaean (2480-2740 Ma) greenstone belts of the Central Plateau, Tanzania. North and east of Nzega (4/sup 0/12'S, 3/sup 0/11'E), 18% of the exposed basement, mainly Dodoman schists and granites, consists of metavolcanics and metasediments of the Nyanzian and Kavirondian Series. Four styles of mineralization are observed. 1. Stratabound quartz-gold veins with minor sulfides. Host rocks are quartz porphyry, banded iron formation (BIF), magnetite quartzite, and dense, cherty jasperite at the Sekenke and Canuck mines. The Canuck veins are on strike from BIF's in quartz-eye porphyry of the Igusule Hills. 2. Stratabound, disseminated gold in coarse-grained, crowded feldspar porphyry with lithic fragments and minor pyrite. At Bulangamilwa, the porphyry is conformable with Nyanzian-aged submarine (.) greenstone, volcanic sediment, felsic volcanics, and sericite phyllite. The deposits are on strike with BIF of the Wella Hills, which contains massive sulfide with up to 15% Pb+Zn. 3. Disseminated gold in quartz-albite metasomes in Nyanzian greenstones. At Kirondatal, alteration is associated with alaskites and feldspar porphyry dikes traceable several hundred meters into post-Dodoman diorite porphyry. Gold is with pyrite, arsenopyrite, pyrrhotite, minor chalcopyrite, and sphalerite as well as tourmalinite and silica-cemented breccias. 4. Basal Kavirondian placers in metaconglomerates containing cobbles and boulders of Dodoman and Nyanzian rocks several hundred meters up-section from the stratabound, disseminated mineralization at Bulangamilwa.
5. Genomes to Life''Center for Molecular and Cellular Systems'': A research program for identification and characterization of protein complexes.
Energy Technology Data Exchange (ETDEWEB)
Buchanan, M V.; Larimer, Frank; Wiley, H S.; Kennel, S J.; Squier, Thomas C.; Ramsey, John M.; Rodland, Karin D.; Hurst, G B.; Smith, Richard D.; Xu, Ying; Dixon, David A.; Doktycz, M J.; Colson, Steve D.; Gesteland, R; Giometti, Carol S.; Young, Mark E.; Giddings, Ralph M.
Goal 1 of Department of Energy's Genomes to Life (GTL) program seeks to identify and characterize the complete set of protein complexes within a cell. Goal 1 forms the foundation necessary to accomplish the other objectives of the GTL program, which focus on gene regulatory networks and molecular level characterization of interactions in microbial communities. Together this information would allow cells and their components to be understood in sufficient detail to predict, test, and understand the responses of a biological system to its environment. The Center for Molecular and Cellular Systems has been established to identify and characterize protein complexes using high through-put analytical technologies. A dynamic research program is being developed that supports the goals of the Center by focusing on the development of new capabilities for sample preparation and complex separations, molecular level identification of the protein complexes by mass spectrometry, characterization of the complexes in living cells by imaging techniques, and bioinformatics and computational tools for the collection and interpretation of data and formation of databases and tools to allow the data to be shared by the biological community.
6. Audio-magnetotelluric investigation of allochthonous iron formations in the Archaean Reguibat shield (Mauritania): structural and mining implications (United States)
Bronner, G.; Fourno, J. P.
The M'Haoudat range, considered as an allochthonous unit amid the strongly metamorphosed Archaean basement (Tiris Group), belongs to the Lower Proterozoic Ijil Group, weakly metamorphosed, constituted mainly by iron quartzites including red jaspers and high grade iron ore. Audio-magnetotelluric (AMT) soundings (frequency range 1-7500 HZ) were performed together with the systematic survey of the range (SNIM mining company). The non-linear least squares method was used to perform a smoothness-constrained data model. The obvious AMT resistivity contrasts between the M'Haoudat Unit (150-3500 ohm. m) and the Archaean basement (20 000 ohm. m) allow to state precisely that the two thrust surfaces, on both sides of the range, join together at a depth which increases from North-West to South-East, as the ore bodies. Inside the steeply dipping M'Haoudat Unit, the main beds of iron quartzites (1500-3500 ohm. m), schists (1000-1500 ohm. m) and hematite ores (150-300 ohm. m) were distinguished when their thickness exceeded 30 to 50 m. The existence of an hydrostatic level (1-50 ohm. m) and the steeply dipping architecture, very likely responsible for the lack of resistivity contrast on the upper part of some profiles, complicate the interpretation at high frequencies the thin layers being poorly defined.
7. Generation of continental crust in the northern part of the Borborema Province, northeastern Brazil, from Archaean to Neoproterozoic (United States)
de Souza, Zorano Sérgio; Kalsbeek, Feiko; Deng, Xiao-Dong; Frei, Robert; Kokfelt, Thomas Find; Dantas, Elton Luiz; Li, Jian-Wei; Pimentel, Márcio Martins; Galindo, Antonio Carlos
This work deals with the origin and evolution of the magmatic rocks in the area north of the Patos Lineament in the Borborema Province (BP). This northeastern segment of NE Brazil is composed of at least six different tectonic blocks with ages varying from late-Archaean to late-Palaeoproterozoic. Archaean rocks cover ca. 5% of the region. They were emplaced over a period of 700 Ma, with at least seven events of magma generation, at 3.41, 3.36, 3.25, 3.18, 3.12, 3.03, and 2.69 Ga. The rocks are subalkaline to slightly alkaline, with affinity to I- and M-type magmas; they follow trondhjemitic or potassium calc-alkaline differentiation trends. They have epsilon Nd(t) of +1.4 to -4.2 and negative anomalies for Ta-Nb, P and Ti, consistent with a convergent tectonic setting. Both subducted oceanic crust and upper mantle (depleted or metasomatised) served as sources of the magmas. After a time lapse of about 350 m y., large-scale emplacement of Paleoproterozoic units took place. These rocks cover about 50% of the region. Their geochemistry indicates juvenile magmatism with a minor contribution from crustal sources. These rocks also exhibit potassic calc-alkaline differentiation trends, again akin to I- and M-type magmas, and show negative anomalies for Ta-Nb, Ti and P. Depleted and metasomatised mantle, resulting from interaction with adakitic or trondhjemitic melts in a subduction zone setting, is interpreted to be the main source of the magmas, predominanting over crustal recycling. U-Pb ages indicate generation of plutonic rocks at 2.24-2.22 Ga (in some places at about 2.4-2.3 Ga) and 2.13-2.11 Ga, and andesitic volcanism at 2.15 Ga. Isotopic evidence indicates juvenile magmatism (epsilon Nd(t) of +2.9 to -2.9). After a time lapse of about 200 m y. a period of within-plate magmatic activity followed, with acidic volcanism (1.79 Ga) in Orós, granitic plutonism (1.74 Ga) in the Seridó region, anorthosites (1.70 Ga) and A-type granites (1.6 Ga) in the Transverse Zone
8. Orogenic gold mineralisation hosted by Archaean basement rocks at Sortekap, Kangerlussuaq area, East Greenland (United States)
Holwell, D. A.; Jenkin, G. R. T.; Butterworth, K. G.; Abraham-James, T.; Boyce, A. J.
A gold-bearing quartz vein system has been identified in Archaean basement rocks at Sortekap in the Kangerlussuaq region of east Greenland, 35 km north-northeast of the Skaergaard Intrusion. This constitutes the first recorded occurrence of Au mineralisation in the metamorphic basement rocks of east Greenland. The mineralisation can be classified as orogenic style, quartz vein-hosted Au mineralisation. Two vein types have been identified based on their alteration styles and the presence of Au mineralisation. Mineralised type 1 veins occur within sheared supracrustal units and are hosted by garnet-bearing amphibolites, with associated felsic and ultramafic intrusions. Gold is present as native Au and Au-rich electrum together with arsenopyrite and minor pyrite and chalcopyrite in thin alteration selvages in the immediate wall rocks. The alteration assemblage of actinolite-clinozoisite-muscovite-titanite-scheelite-arsenopyrite-pyrite is considered to be a greenschist facies assemblage. The timing of mineralisation is therefore interpreted as being later and separate event to the peak amphibolite facies metamorphism of the host rocks. Type 2 quartz veins are barren of mineralisation, lack significant alteration of the wall rocks and are considered to be later stage. Fluid inclusion microthermometry of the quartz reveals three separate fluids, including a high temperature ( T h = 300-350 °C), H2O-CO2-CH4 fluid present only in type 1 veins that in interpreted to be responsible for the main stage of Au deposition and sulphidic wall rock alteration. It is likely that the carbonic fluids were actually trapped at temperatures closer to 400 °C. Two other fluids were identified within both vein types, which comprise low temperature (100-200 °C) brines, with salinities of 13-25 wt% eq. NaCl and at least one generation of low salinity aqueous fluids. The sources and timings of the secondary fluids are currently equivocal but they may be related to the emplacement of
9. Multifractal spatial organisation in hydrothermal gold systems of the Archaean Yilgarn craton, Western Australia (United States)
Munro, Mark; Ord, Alison; Hobbs, Bruce
A range of factors controls the location of hydrothermal alteration and gold mineralisation in the Earth's crust. These include the broad-scale lithospheric architecture, availability of fluid sources, fluid composition and pH, pressure-temperature conditions, microscopic to macroscopic structural development, the distribution of primary lithologies, and the extent of fluid-rock interactions. Consequently, the spatial distribution of alteration and mineralization in hydrothermal systems is complex and often considered highly irregular. However, despite this, do they organize themselves in a configuration that can be documented and quantified? Wavelets, mathematical functions representing wave-like oscillations, are commonly used in digital signals analysis. Wavelet-based multifractal analysis involves incrementally scanning a wavelet across the dataset multiple times (varying its scale) and recording its degree of fit to the signal at each interval. This approach (the wavelet transform modulus maxima method) highlights patterns of self-similarity present in the dataset and addresses the range of scales over which these patterns replicate themselves (expressed by their range in 'fractal dimension'). Focusing on seven gold ore bodies in the Archaean Yilgarn craton of Western Australia, this study investigates whether different aspects of hydrothermal gold systems evolve to organize themselves spatially as multifractals. Four ore bodies were selected from the Sunrise Dam deposit (situated in the Laverton tectonic zone of the Kurnalpi terrane) in addition to the Imperial, Majestic and Salt Creek gold prospects, situated in the Yindarlgooda dome of the Mount Monger goldfield (approximately 40km due east of Kalgoorlie). The Vogue, GQ, Cosmo East and Astro ore bodies at Sunrise Dam were chosen because they exhibit different structural geometries and relationships between gold and associated host-rock alteration styles. Wavelet-based analysis was conducted on 0.5m and 1m
10. Mineralogical and geochemical characteristics of the Archaean LCT pegmatite deposit Cattlin Creek, Ravensthorpe, Western Australia (United States)
Bauer, Matthias; Dittrich, Thomas; Seifert, Thomas; Schulz, Bernhard
The LCT (lithium-cesium-tantalum) pegmatite Cattlin Creek is located about 550 km ESE of Perth, Western Australia. The complex-type, rare-element pegmatite is hosted in metamorphic rocks of the Archaean Ravensthorpe greenstone belt, which constitutes of the southern edge of the Southern Cross Terranes of the Yilgarn Craton. The deposit is currently mined for both lithium and tantalum by Galaxy Resources Limited since 2010. The pegmatitic melt intruded in a weak structural zone of crossing thrust faults and formed several pegmatite sills, of which the surface nearest mineralized pegmatite body is up to 21 m thick. The Cattlin Creek pegmatite is characterized by an extreme fractionation that resulted in the enrichment of rare elements like Li, Cs, Rb, Sn and Ta, as well as the formation of a vertical zonation expressed by distinct mineral assemblages. The border zone comprises a fine-grained mineral assemblage consisting of albite, quartz, muscovite that merges into a medium-grained wall zone and pegmatitic-textured intermediate zones. Those zones are manifested by the occurrence of megacrystic spodumene crystals with grain sizes ranging from a couple of centimeters up to several metres. The core zone represents the most fractionated part of the pegmatite and consists of lepidolite, cleavelandite, and quartz. It also exhibits the highest concentrations of Cs (0.5 wt.%), Li (0.4 wt.%), Rb (3 wt.%), Ta (0.3 wt.%) and F (4 wt.%). This zone was probably formed in the very last crystallization stage of the pegmatite and its minerals replaced earlier crystallized mineral assemblages. Moreover, the core zone hosts subordinate extremely Cs-enriched (up to 13 wt.% Cs2O) mineral species of beryl. The chemical composition of this beryl resamples that of the extreme rare beryl-variety pezzotaite. Other observed subordinate, minor and accessory minerals comprise tourmaline, garnet, cassiterite, apatite, (mangano-) columbite, tantalite, microlite (Bi-bearing), gahnite, fluorite
11. Evidence of Meso-Archaean subduction from the Torckler-Tango Layered Complex, Rauer Group, Prydz bay, East Antarctica (United States)
McCallum, C. A.; Harley, S. L.
The Archaean Torckler Tango Layered complex (TTLC) of the Rauer Group, East Antarctica, consists of a series of elongate mega-boudins that can be traced over a strike length of 7 km, enclosed within and intruded by c. 2.8 Ga homogeneous tonalitic orthogneisses. Despite later granulite facies metamorphism (860-900°C, 0.7 GPa) original igneous structures and layering features of the TTLC are very well preserved. Graded and cross stratified layering is evident, as are load-cast structures and geopetal structures. Isotopic and LILE signatures indicate that crustal contamination has been negligible and that metamorphic disturbances have been minor. As a result, the whole rock chemistry of the TTLC is considered to reflect its igneous protoliths. This whole rock geochemistry is distinctive, with high MgO (av. 15.8 wt%), high Mg# (av. 79.1) low TiO2 (av.< 0.33 wt%), and high SiO2 (av. 52.5 wt%). The TTLC can be subdivided into two geochemical groupings based upon Al2O3 and Cr abundances, which provide clear evidence for the crystal fractionation and accumulation processes active within the complex. Trace-element and REE element ratios show coherent trends. Based on its systematic major element (Al2O3/TiO2 ~40), trace element ratios Ti/Zr vs. Zr (Ti/Zr ~34-59 at Zr ~15-40 ppm), and negative HSFE anomalies, the TTLC is similar in geochemistry to both modern, neo-Proterozoic and Archaean boninitic rocks. Magmatic zircons define an intrusive age for the TTLC of ca. 3280 ± 22 Ma. HSFE ratios, and whole rock Nd isotope ratios recalculated back to this age, are consistent with a juvenile depleted source for the primary magma. The TTLC is therefore interpreted as the intrusive equivalent of a boninite, produced through the shallow melting of refractory mantle and supportive of the operation of subduction-like processes in the early-mid Archaean.
12. Cellular automata
CERN Document Server
Codd, E F
Cellular Automata presents the fundamental principles of homogeneous cellular systems. This book discusses the possibility of biochemical computers with self-reproducing capability.Organized into eight chapters, this book begins with an overview of some theorems dealing with conditions under which universal computation and construction can be exhibited in cellular spaces. This text then presents a design for a machine embedded in a cellular space or a machine that can compute all computable functions and construct a replica of itself in any accessible and sufficiently large region of t
13. Tectonic evolvement of metamorphic complexes at Jilin paleocontinental margin during the transition from late Archaean to early Proterozoic
Institute of Scientific and Technical Information of China (English)
SUN Zhongshi; DENG Jun; JIANG Yanguo; WANG Jianping; WANG Qingfei; WEI Yanguang
The kinematics and dynamical process of tectonic evolvement of metamorphic complexes at the interim from late Archaean to early Proterozoic is one of the key problems in geosciences. For the disputation on the genesis of metamorphic complexes at the margin of Jilin palaeocontinent, this paper takes the example of Banshigou region, Jilin Province to discuss the dynamical evolution of palaeocontinent during the transition from late Archaean to early Proterozoic (2600-2000 Ma). On the time sequence, from center to palaeocontinental margin, it shows a series of dynamical movements including underplating, horizontal movement, subduction, intraplate extension and separation. And its corresponding sequence of kinematical modes is as follows: vertical movement, horizontal movement, extension and shearing in contact zone,uplift-sliding movement in paleocontinental margin and interformational sliding, resulting in such tectonite sequence, tectonic gneiss, gneissic complex, gneissic complex-mylonite, mylonite and fracture cleavage-mylonite, which consist of the main body of metamorphic complexes. Their palaeostresses are: < 20, 20.40, 21.72, 28.80 and 30.8-69.8 MPa respectively. The deformational metamorphic temperature is between hornblende and low-grade greenschist facies. The general deformational characters of Jilin palaeocontinent reflect a complete dynamic system of crust evolution, which indicates that the formation of the metamorphic complexes and the tectonic evolution are altered from vertical movement to compression to extension. It also indicates a continuous tectonic transformation from deep to shallow, and from ductile to brittle. The transformation between different dynamic mechanisms not only forms tectonic rocks, but also benefits the linking up, exchange and enrichment with rock-forming minerals and ore-forming elements.This research is helpful to classifying regional tectonic events and making further study on the evolution of palaeocontinental dynamics.
14. Reactive Programming of Cellular Automata
Boussinot, Frédéric
Implementation of cellular automata using reactive programming gives a way to code cell behaviors in an abstract and modular way. Multiprocessing also becomes possible. The paper describes the implementation of cellular automata with the reactive programming language LOFT, a thread-based extension of C. Self replicating loops considered in artificial life are coded to show the interest of the approach.
15. The GH/IGF-1 axis in a critical period early in life determines cellular DNA repair capacity by altering transcriptional regulation of DNA repair-related genes: implications for the developmental origins of cancer. (United States)
Podlutsky, Andrej; Valcarcel-Ares, Marta Noa; Yancey, Krysta; Podlutskaya, Viktorija; Nagykaldi, Eszter; Gautam, Tripti; Miller, Richard A; Sonntag, William E; Csiszar, Anna; Ungvari, Zoltan
Experimental, clinical, and epidemiological findings support the concept of developmental origins of health and disease (DOHAD), suggesting that early-life hormonal influences during a sensitive period around adolescence have a powerful impact on cancer morbidity later in life. The endocrine changes that occur during puberty are highly conserved across mammalian species and include dramatic increases in circulating GH and IGF-1 levels. Importantly, patients with developmental IGF-1 deficiency due to GH insensitivity (Laron syndrome) do not develop cancer during aging. Rodents with developmental GH/IGF-1 deficiency also exhibit significantly decreased cancer incidence at old age, marked resistance to chemically induced carcinogenesis, and cellular resistance to genotoxic stressors. Early-life treatment of GH/IGF-1-deficient mice and rats with GH reverses the cancer resistance phenotype; however, the underlying molecular mechanisms remain elusive. The present study was designed to test the hypothesis that developmental GH/IGF-1 status impacts cellular DNA repair mechanisms. To achieve that goal, we assessed repair of γ-irradiation-induced DNA damage (single-cell gel electrophoresis/comet assay) and basal and post-irradiation expression of DNA repair-related genes (qPCR) in primary fibroblasts derived from control rats, Lewis dwarf rats (a model of developmental GH/IGF-1 deficiency), and GH-replete dwarf rats (GH administered beginning at 5 weeks of age, for 30 days). We found that developmental GH/IGF-1 deficiency resulted in persisting increases in cellular DNA repair capacity and upregulation of several DNA repair-related genes (e.g., Gadd45a, Bbc3). Peripubertal GH treatment reversed the radiation resistance phenotype. Fibroblasts of GH/IGF-1-deficient Snell dwarf mice also exhibited improved DNA repair capacity, showing that the persisting influence of peripubertal GH/IGF-1 status is not species-dependent. Collectively, GH/IGF-1 levels during a critical period
16. Palaeoproterozoic high-pressure granulite overprint of the Archaean continental crust: evidence for homogeneous crustal thickening (Man Rise, Ivory Coast) (United States)
Pitra, Pavel; Kouamelan, Alain N.; Ballèvre, Michel; Peucat, Jean-Jacques
The character of mountain building processes in the Palaeoproterozoic times is subject to much debate. The local observation of Barrovian-type assemblages and high-pressure granulite relics in the Man Rise (Côte d'Ivoire), led some authors to argue that Eburnean (Palaeoproterozoic) reworking of the Archaean basement was achieved by modern-style thrust-dominated tectonics (e.g., Feybesse & Milési, 1994). However, it has been suggested that crustal thickening and subsequent exhumation of high-pressure crustal rocks can be achieved by virtue of homogeneous, fold-dominated deformation of hot crustal domains even in Phanerozoic orogenic belts (e.g., Schulmann et al., 2002; 2008). We describe a mafic granulite of the Kouibli area (Archaean part of the Man Rise, western Ivory Coast) that displays a primary assemblage (M1) containing garnet, diopsidic clinopyroxene, red-brown pargasitic amphibole, plagioclase (andesine), rutile, ilmenite and quartz. This assemblage is associated with a subvertical regional foliation. Symplectites that develop at the expense of the M1 assemblage contain orthopyroxene, clinopyroxene, plagioclase (bytownite), green pargasitic amphibole, ilmenite and magnetite (M2). Multiequilibrium thermobarometric calculations and P-T pseudosections calculated with THERMOCALC suggest granulite-facies conditions of ca. 13 kbar, 850°C and <7 kbar, 700-800°C for M1 and M2, respectively. In agreement with the qualitative information obtained from reaction textures and chemical zoning of minerals, this suggests an evolution dominated by decompression accompanied by moderate cooling. A Sm-Nd garnet - whole-rock age of 2.03 Ga determined on this sample indicates that this evolution occurred during the Palaeoproterozoic. We argue that from the geodynamic point of view the observed features are best explained by homogeneous thickening of the margin of the Archaean craton, re-heated and softened due to the accretion of hot, juvenile Palaeoproterozoic crust, as
17. Geochemical Evidence for Subduction in the Early Archaean from Quartz-Carbonate-Fuchsite Mineralization, Isua Supracrustal Belt, West Greenland (United States)
Pope, E. C.; Rosing, M. T.; Bird, D. K.
Quartz, carbonate and fuchsite (chromian muscovite) is a common metasomatic assemblage observed in orogenic gold systems, both in Phanerozoic convergent margin settings, and within supracrustal and greenstone belts of Precambrian rocks. Geologic and geochemical observations in younger orogenic systems suggest that ore-forming metasomatic fluids are derived from subduction-related devolitilization reactions, implying that orogenic Au-deposits in Archaean and Proterozoic supracrustal rock suites are related to subduction-style plate tectonics beginning early in Earth history. Justification of this metasomatic-tectonic relationship requires that 1) Phanerozoic orogenic Au-deposits form in subduction-zone environments, and 2) the geochemical similarity of Precambrian orogenic deposits to their younger counterparts is the result of having the same petro-genetic origin. Hydrogen and oxygen isotope compositions of fuchsite and quartz from auriferous mineralization in the ca. 3.8 Ga Isua Supracrustal Belt (ISB) in West Greenland, in conjunction with elevated concentrations of CO2, Cr, Al, K and silica relative to protolith assemblages, suggest that this mineralization shares a common petro-tectonic origin with Phanerozoic orogenic deposits and that this type of metasomatism is a unique result of subduction-related processes. Fuchsite from the ISB has a δ18O and δD of +7.7 to +17.9% and -115 to -61%, respectively. δ18O of quartz from the same rocks is between +10.3 and +18.6%. Muscovite-quartz oxygen isotope thermometry indicates that the mineralization occurred at 560 ± 90oC, from fluids with a δD of -73 to -49% and δ18O of +8.8 to +17.2%. Calculation of isotopic fractionation during fluid-rock reactions along hypothetical fluid pathways demonstrates that these values, as well as those in younger orogenic deposits, are the result of seawater-derived fluids liberated from subducting lithosphere interacting with ultramafic rocks in the mantle wedge and lower crust
18. Cellular Telephone
Institute of Scientific and Technical Information of China (English)
Cellular phones, used in automobiles, airliners, and passenger trains, are basically low-power radiotelephones. Calls go through radio transmitters that are located within small geographical units called cells. Because each cell’s signals are too weak to interfere with those of other cells operating on the same fre-
19. Structural observations and U-Pb mineral ages from igneous rocks at the Archaean-Palaeoproterozoic boundary in the Salahmi Schist Belt, central Finland: constraints on tectonic evolution
Directory of Open Access Journals (Sweden)
Pietikäinen, K.
Full Text Available The study area in Vieremä, central Finland, contains part of Archaean-Palaeoproterozoic boundary. In the east, the area comprises Archaean gneiss and the Salahmi Schist Belt. The rocks of the schist belt are turbiditic metagreywackes, with well-preserved depositional structures, occurring as Proterozoic wedge-shaped blocks, and staurolite schists, the latter representing higher-strained and metamorphosed equivalents of the metagreywackes. In the west of the area there is an Archaean gneiss block, containing strongly elongated structures, and deformed Svecofennian supracrustal rocks, which are cut by deformed granitoids. These are juxtaposed with the schist belt. The boundaries of these tectonometamorphic blocks are narrow, highly strained mylonites and thrust zones. The metamorphic grade of the supracrustal rocks increases from east to west, the increase being stepwise across the mylonitic block boundaries. The rocks are more deformed from east to west with younger structures overprinting. In the staurolite schists of the Salahmi Schist Belt, the most prominent structure is a lineation (L2 that overprints the bedding and axial plane foliation. In Sorronmäki quarry, at the western boundary of the schist belt, this Palaeoproterozoic lineation dominates all the structures in tonalite gneiss, which gives a U-Pb age of 2731±6 Ma. Southeast of the quarry, at the same boundary, the Salahmi schists have been overturned towards the northeast, suggesting that the Archaean gneiss at Sorronmäki has been thrust towards the northeast over these rocks. In the western part of the study area, the Leppikangas granodiorite that intrudes the Svecofennian supracrustal rocks gives a U-Pb age of 1891+6 Ma. In the granodiorite, a strong lineation formed by the intersection of two foliations, which maybe L2 is associated with thrusting towards the northeast. The monazite age of the Archaean Sorronmäki gneiss is 1817+3 Ma, and the titanite age of the Svecofennian
20. Oxygen free period in the history of Earth and life in it
Directory of Open Access Journals (Sweden)
Георгій Ілліч Рудько
Full Text Available The development of Earth in the context of its formation as also emergence of the original atmosphere and hydrosphere are presented in the article. Main stages of the atmosphere evolution have occurred in the Archaean. The mechanisms of life origin, their impact on environmental development and changes are described as well. A brief description of the most ancient sediments composed by the archaebacteria and cyanobacteria is considered.
1. Constraints on ocean carbonate chemistry and p_(CO_2) in the Archaean and Palaeoproterozoic
C. L. Blättler; Kump, L.R.; Fischer, W. W.; De Paris, G.; Kasbohm, J. J.; Higgins, J A
One of the great problems in the history of Earth’s climate is how to reconcile evidence for liquid water and habitable climates on early Earth with the Faint Young Sun predicted from stellar evolution models. Possible solutions include a wide range of atmospheric and oceanic chemistries, with large uncertainties in boundary conditions for the evolution and diversification of life and the role of the global carbon cycle in maintaining habitable climates. Increased atmospheric CO_2 is a common...
2. Shear Wave Velocity Structure of Southern African Crust: Evidence for Compositional Heterogeneity within Archaean and Proterozoic Terrains
Energy Technology Data Exchange (ETDEWEB)
Kgaswane, E M; Nyblade, A A; Julia, J; Dirks, P H H M; Durrheim, R J; Pasyanos, M E
Crustal structure in southern Africa has been investigated by jointly inverting receiver functions and Rayleigh wave group velocities for 89 broadband seismic stations spanning much of the Precambrian shield of southern Africa. 1-D shear wave velocity profiles obtained from the inversion yield Moho depths that are similar to those reported in previous studies and show considerable variability in the shear wave velocity structure of the lower part of the crust between some terrains. For many of the Archaean and Proterozoic terrains in the shield, S velocities reach 4.0 km/s or higher over a substantial part of the lower crust. However, for most of the Kimberley terrain and adjacent parts of the Kheis Province and Witwatersrand terrain, as well as for the western part of the Tokwe terrain, mean shear wave velocities of {le} 3.9 km/s characterize the lower part of the crust along with slightly ({approx}5 km) thinner crust. These findings indicate that the lower crust across much of the shield has a predominantly mafic composition, except for the southwest portion of the Kaapvaal Craton and western portion of the Zimbabwe Craton, where the lower crust is intermediate-to-felsic in composition. The parts of the Kaapvaal Craton underlain by intermediate-to-felsic lower crust coincide with regions where Ventersdorp rocks have been preserved, and thus we suggest that the intermediate-to-felsic composition of the lower crust and the shallower Moho may have resulted from crustal melting during the Ventersdorp tectonomagmatic event at c. 2.7 Ga and concomitant crustal thinning caused by rifting.
3. The 3.5 Ga Siurua trondhjemite gneiss in the Archaean Pudasjärvi Granulite Belt, northern Finland
Directory of Open Access Journals (Sweden)
Tapani Mutanen
Full Text Available In the Archaean Pudasjärvi Complex the pyroxene-bearing rocks are considered to form a belt, the Pudasjärvi Granulite Belt (PGB. The major rock types of the PGB are metaigneous mafic and felsic granulites, and trondhjemite gneisses. Red alaskites, white leucogranites and trondhjemitic pegmatoids are locally abundant. Ion microprobe U-Pb analyses on zircons suggest a magmatic age of ca. 3.5 Ga for the trondhjemite gneiss in Siurua, considered the oldest rock so far identified in the Fennoscandian Shield. The old age is supported by the Sm-Nd depleted mantle model age of 3.5 Ga, and by conventional U-Pb zircon data, which have provided a minimum age of 3.32 Ga. The U-Pb sims-data on the Siurua gneiss are, however, heterogeneous and suggest several stages of zircon growth, mostly at 3.5–3.4 Ga. An inherited core in one crystal provided an age of 3.73Ga, whereas the youngest two analyses yield ages of 3.1 and 3.3 Ga. Metamorphic monazite formed in the Siurua gneiss ca. 2.66 Ga ago, roughly contemporaneously with the high-grade metamorphism recorded by zircon in a mafi c granulite. Magmatic zircons from a felsic high-grade rock provide ages of ca. 2.96 Ga, but no zircons coeval with the 2.65 Ga metamorphism were detected by ion-microprobe. As a whole the PGB seems to be a tectonic block-mosaic containing rocks with Sm-Nd crustal formation ages ranging from 3.5 to 2.8 Ga.
4. Molecular and Cellular Signaling
CERN Document Server
Beckerman, Martin
A small number of signaling pathways, no more than a dozen or so, form a control layer that is responsible for all signaling in and between cells of the human body. The signaling proteins belonging to the control layer determine what kinds of cells are made during development and how they function during adult life. Malfunctions in the proteins belonging to the control layer are responsible for a host of human diseases ranging from neurological disorders to cancers. Most drugs target components in the control layer, and difficulties in drug design are intimately related to the architecture of the control layer. Molecular and Cellular Signaling provides an introduction to molecular and cellular signaling in biological systems with an emphasis on the underlying physical principles. The text is aimed at upper-level undergraduates, graduate students and individuals in medicine and pharmacology interested in broadening their understanding of how cells regulate and coordinate their core activities and how diseases ...
5. The amalgamation of the supercontinent of North China Craton at the end of Neo-Archaean and its breakup during late Palaeoproterozoic and Meso-Proterozoic
Institute of Scientific and Technical Information of China (English)
翟明国; 卞爱国; 赵太平
The most important geological events in the formation and evolution of the North China Craton concentrate at two stages: 2 600-2 400 Ma and 2 000-1 700 Ma (briefly, we call them 2.5 Ga event and 1.8 Ga event respectively in this paper). We propose that the essences of these two events are: Several Archaean micro-continents amalgamated to form one supercontinent according to the plate tectonic principle with a small scale at about 2.5 Ga, and the supercontinent broke down by upwelling of an ancient mantle plume at about 1.8 Ga.
6. Geochemistry and petrogenesis of high-K "sanukitoids" from the Bulai pluton, Central Limpopo Belt, South Africa: Implications for geodynamic changes at the Archaean-Proterozoic boundary (United States)
Laurent, Oscar; Martin, Hervé; Doucelance, Régis; Moyen, Jean-François; Paquette, Jean-Louis
The Neoarchaean Bulai pluton is a magmatic complex intrusive in the Central Zone of the Limpopo Belt (Limpopo Province, South Africa). It is made up of large volumes of porphyritic granodiorites with subordinate enclaves and dykes of monzodioritic, enderbitic and granitic compositions. New U-Pb LA-ICP-MS dating on zircon yield pluton-emplacement ages ranging between 2.58 and 2.61 Ga. The whole pluton underwent a high-grade thermal overprint at ~ 2.0 Ga, which did not affect the whole-rock compositions for most of the major and trace-elements, as suggested by a Sm-Nd isochron built up with 16 samples and yielding an age consistent with U-Pb dating. The whole-rock major- and trace-element compositions evidence that the Bulai pluton belongs to a high-K, calc-alkaline to shoshonitic suite, as well as unequivocal affinities with "high-Ti" sanukitoids. Monzodioritic enclaves and enderbites have both "juvenile" affinities and a strongly enriched signature in terms of incompatible trace elements (LREE, HFSE and LILE), pointing to an enriched mantle source. Based on trace-element compositions, we propose the metasomatic agent at their origin to be a melt deriving from terrigenous sediments. We therefore suggest a two-step petrogenetic model for the Bulai pluton: (1) a liquid produced by melting of subducted terrigenous sediments is consumed by reactions with mantle peridotite, producing a metasomatic assemblage; (2) low-degree melting of this metasomatized mantle gives rise to Bulai mafic magmas. Such a model is supported by geochemical modelling and is consistent with previous studies concluding that sanukitoids result from interactions between slab melts and the overlying mantle wedge. Before 2.5 Ga, melting of hydrous subducted metabasalts produced large volumes of TTG (Tonalite-Trondhjemite-Granodiorite) forming most of the volume of Archaean continental crust. By constrast, our geochemical study failed in demonstrating any significant role played by melting of
7. Asteroids and Archaean crustal evolution: Tests of possible genetic links between major mantle/crust melting events and clustered extraterrestrial bombardments (United States)
Glikson, A. Y.
Since the oldest intact terrestrial rocks of ca. 4.0 Ga and oldest zircon xenocrysts of ca. 4.3 Ga measured to date overlap with the lunar late heavy bombardment, the early Precambrian record requires close reexamination vis a vis the effects of megaimpacts. The identification of microtektite-bearing horizons containing spinals of chondritic chemistry and Ir anomalies in 3.5-3.4-Ga greenstone belts provides the first direct evidence for large-scale Archaean impacts. The Archaean crustal record contains evidence for several major greenstone-granite-forming episodes where deep upwelling and adiabatic fusion of the mantle was accompanied by contemporaneous crustal anatexis. Isotopic age studies suggest evidence for principal age clusters about 3.5, 3.0, and 2.7 (+/- 0.8) Ga, relics of a ca. 3.8-Ga event, and several less well defined episodes. These peak events were accompanied and followed by protracted thermal fluctuations in intracrustal high-grade metamorphic zones. Interpretations of these events in terms of internal dynamics of the Earth are difficult to reconcile with the thermal behavior of silicate rheologies in a continuously convecting mantle regime. A triggering of these episodes by mantle rebound response to intermittent extraterrestrial asteroid impacts is supported by (1) identification of major Archaean impacts from microtektite and distal ejecta horizons marked by Ir anomalies; (2) geochemical and experimental evidence for mantle upwelling, possibly from levels as deep as the transition zone; and (3) catastrophic adiabatic melting required to generate peridotitic komatites. Episodic differentiation/accretion growth of sial consequent on these events is capable of resolving the volume problem that arises from comparisons between modern continental crust and the estimated sial produced by continuous two-stage mantle melting processes. The volume problem is exacerbated by projected high accretion rates under Archaean geotherms. It is suggested that
8. Application of Cellular Neural Networks in Real Life%应用细胞神经网络预测冰雹研究
Institute of Scientific and Technical Information of China (English)
崔金蕾; 李国东
通过对气象雷达图像的内部信息进行挖掘与分析,进行冰雹预测。采用细胞神经网络进行边缘探测提取,结合小波变换进行数据挖掘的方法,找寻其规律,得到五种系数的规律,并且验证规律的可行性,为冰雹预测提供一个较有效的方法。%Through the data mining and analysis of weather radar images of internal information, hail forecast was conducted. Edge detection was extracted by using cellular neural networks, combining wavelet transform with data mining method to find the rules; we obtain five coefficients of the rules, and verify the feasibility of the law, providing a more effective method for hail forecast.
9. Early Life on Earth: the Ancient Fossil Record (United States)
Westall, F.
The evidence for early life and its initial evolution on Earth is lin= ked intimately with the geological evolution of the early Earth. The environment of the early Earth would be considered extreme by modern standards: hot (50-80=B0C), volcanically and hydrothermally active, a= noxic, high UV flux, and a high flux of extraterrestrial impacts. Habitats = for life were more limited until continent-building processes resulted in= the formation of stable cratons with wide, shallow, continental platforms= in the Mid-Late Archaean. Unfortunately there are no records of the first appearance of life and the earliest isotopic indications of the exist= ence of organisms fractionating carbon in ~3.8 Ga rocks from the Isua greenst= one belt in Greenland are tenuous. Well-preserved microfossils and micro= bial mats (in the form of tabular and domical stromatolites) occur in 3.5-= 3.3 Ga, Early Archaean, sedimentary formations from the Barberton (South Afri= ca) and Pilbara (Australia) greenstone belts. They document life forms that = show a relatively advanced level of evolution. Microfossil morphology inclu= des filamentous, coccoid, rod and vibroid shapes. Colonial microorganism= s formed biofilms and microbial mats at the surfaces of volcaniclastic = and chemical sediments, some of which created (small) macroscopic microbi= alites such as stromatolites. Anoxygenic photosynthesis may already have developed. Carbon, nitrogen and sulphur isotopes ratios are in the r= ange of those for organisms with anaerobic metabolisms, such as methanogenesi= s, sulphate reduction and photosynthesis. Life was apparently distribute= d widely in shallow-water to littoral environments, including exposed, evaporitic basins and regions of hydrothermal activity. Biomass in t= he early Archaean was restricted owing to the limited amount of energy t= hat could be produced by anaerobic metabolisms. Microfossils resembling o= xygenic photosynthesisers, such as cyanobacteria, probably first occurred in
10. Failover in cellular automata
CERN Document Server
Kumar, Shailesh
A cellular automata (CA) configuration is constructed that exhibits emergent failover. The configuration is based on standard Game of Life rules. Gliders and glider-guns form the core messaging structure in the configuration. The blinker is represented as the basic computational unit, and it is shown how it can be recreated in case of a failure. Stateless failover using primary-backup mechanism is demonstrated. The details of the CA components used in the configuration and its working are described, and a simulation of the complete configuration is also presented.
11. Constraints on ocean carbonate chemistry and pCO2 in the Archaean and Palaeoproterozoic (United States)
Blättler, C. L.; Kump, L. R.; Fischer, W. W.; Paris, G.; Kasbohm, J. J.; Higgins, J. A.
One of the great problems in the history of Earth’s climate is how to reconcile evidence for liquid water and habitable climates on early Earth with the Faint Young Sun predicted from stellar evolution models. Possible solutions include a wide range of atmospheric and oceanic chemistries, with large uncertainties in boundary conditions for the evolution and diversification of life and the role of the global carbon cycle in maintaining habitable climates. Increased atmospheric CO2 is a common component of many solutions, but its connection to the carbon chemistry of the ocean remains unknown. Here we present calcium isotope data spanning the period from 2.7 to 1.9 billion years ago from evaporitic sedimentary carbonates that can test this relationship. These data, from the Tumbiana Formation, the Campbellrand Platform and the Pethei Group, exhibit limited variability. Such limited variability occurs in marine environments with a high ratio of calcium to carbonate alkalinity. We are therefore able to rule out soda ocean conditions during this period of Earth history. We further interpret this and existing data to provide empirical constraints for carbonate chemistry of the ancient oceans and for the role of CO2 in compensating for the Faint Young Sun.
12. Cellular capacities for high-light acclimation and changing lipid profiles across life cycle stages of the green alga Haematococcus pluvialis.
Directory of Open Access Journals (Sweden)
Baobei Wang
Full Text Available The unicellular microalga Haematococcus pluvialis has emerged as a promising biomass feedstock for the ketocarotenoid astaxanthin and neutral lipid triacylglycerol. Motile flagellates, resting palmella cells, and cysts are the major life cycle stages of H. pluvialis. Fast-growing motile cells are usually used to induce astaxanthin and triacylglycerol biosynthesis under stress conditions (high light or nutrient starvation; however, productivity of biomass and bioproducts are compromised due to the susceptibility of motile cells to stress. This study revealed that the Photosystem II (PSII reaction center D1 protein, the manganese-stabilizing protein PsbO, and several major membrane glycerolipids (particularly for chloroplast membrane lipids monogalactosyldiacylglycerol and phosphatidylglycerol, decreased dramatically in motile cells under high light (HL. In contrast, palmella cells, which are transformed from motile cells after an extended period of time under favorable growth conditions, have developed multiple protective mechanisms--including reduction in chloroplast membrane lipids content, downplay of linear photosynthetic electron transport, and activating nonphotochemical quenching mechanisms--while accumulating triacylglycerol. Consequently, the membrane lipids and PSII proteins (D1 and PsbO remained relatively stable in palmella cells subjected to HL. Introducing palmella instead of motile cells to stress conditions may greatly increase astaxanthin and lipid production in H. pluvialis culture.
13. Coupled phases and combinatorial selection in fluctuating hydrothermal pools: a scenario to guide experimental approaches to the origin of cellular life. (United States)
Damer, Bruce; Deamer, David
Hydrothermal fields on the prebiotic Earth are candidate environments for biogenesis. We propose a model in which molecular systems driven by cycles of hydration and dehydration in such sites undergo chemical evolution in dehydrated films on mineral surfaces followed by encapsulation and combinatorial selection in a hydrated bulk phase. The dehydrated phase can consist of concentrated eutectic mixtures or multilamellar liquid crystalline matrices. Both conditions organize and concentrate potential monomers and thereby promote polymerization reactions that are driven by reduced water activity in the dehydrated phase. In the case of multilamellar lipid matrices, polymers that have been synthesized are captured in lipid vesicles upon rehydration to produce a variety of molecular systems. Each vesicle represents a protocell, an "experiment" in a natural version of combinatorial chemistry. Two kinds of selective processes can then occur. The first is a physical process in which relatively stable molecular systems will be preferentially selected. The second is a chemical process in which rare combinations of encapsulated polymers form systems capable of capturing energy and nutrients to undergo growth by catalyzed polymerization. Given continued cycling over extended time spans, such combinatorial processes will give rise to molecular systems having the fundamental properties of life.
14. Protoliths of enigmatic Archaean gneisses established from zircon inclusion studies: Case study of the Caozhuang quartzite, E. Hebei, China
Directory of Open Access Journals (Sweden)
Allen P. Nutman
Full Text Available A diverse suite of Archaean gneisses at Huangbaiyu village in the North China Craton, includes rare fuchsite-bearing (Cr-muscovite siliceous rocks – known as the Caozhuang quartzite. The Caozhuang quartzite is strongly deformed and locally mylonitic, with silica penetration and pegmatite veining common. It contains abundant 3880–3600 Ma and some Palaeoarchaean zircons. Because of its siliceous nature, the presence of fuchsite and its complex zircon age distribution, it has until now been accepted as a (mature quartzite. However, the Caozhuang quartzite sample studied here is feldspathic. The shape and cathodoluminescence petrography of the Caozhuang quartzite zircons show they resemble those found in immature detrital sedimentary rocks of local provenance or in Eoarchaean polyphase orthogneisses, and not those in mature quartzites. The Caozhuang quartzite intra-zircon mineral inclusions are dominated by quartz, with lesser biotite, apatite (7% and alkali-feldspar, and most inclusions are morphologically simple. A Neoarchaean orthogneiss from near Huangbaiyu displays morphologically simple inclusions with much more apatite (73%, as is typical for fresh calc-alkaline granitoids elsewhere. Zircons were also examined from a mature conglomerate quartzite clast and an immature feldspathic sandstone of the overlying weakly metamorphosed Mesoproterozoic Changcheng System. These zircons have oscillatory zoning, showing they were sourced from igneous rocks. The quartzite clast zircons contain only rare apatite inclusions (<1%, with domains with apatite habit now occupied by intergrowths of muscovite + quartz ± Fe-oxides ± baddeleyite. We interpret that these were once voids after apatite inclusions that had dissolved during Mesoproterozoic weathering, which were then filled with clays ± silica and then weakly metamorphosed. Zircons in the immature feldspathic sandstone show a greater amount of preserved apatite (11%, but with petrographic
15. Protoliths of enigmatic Archaean gneisses established from zircon inclusion studies:Case study of the Caozhuang quartzite, E. Hebei, China
Institute of Scientific and Technical Information of China (English)
Allen P. Nutman; Ronni Maciejowski; Yusheng Wan
A diverse suite of Archaean gneisses at Huangbaiyu village in the North China Craton, includes rare fuchsite-bearing (Cr-muscovite) siliceous rocks e known as the Caozhuang quartzite. The Caozhuang quartzite is strongly deformed and locally mylonitic, with silica penetration and pegmatite veining common. It contains abundant 3880e3600 Ma and some Palaeoarchaean zircons. Because of its siliceous nature, the presence of fuchsite and its complex zircon age distribution, it has until now been accepted as a (mature) quartzite. However, the Caozhuang quartzite sample studied here is feldspathic. The shape and cathodoluminescence petrography of the Caozhuang quartzite zircons show they resemble those found in immature detrital sedimentary rocks of local provenance or in Eoarchaean polyphase orthog-neisses, and not those in mature quartzites. The Caozhuang quartzite intra-zircon mineral inclusions are dominated by quartz, with lesser biotite, apatite (7%) and alkali-feldspar, and most inclusions are morphologically simple. A Neoarchaean orthogneiss from near Huangbaiyu displays morphologically simple inclusions with much more apatite (73%), as is typical for fresh calc-alkaline granitoids elsewhere. Zircons were also examined from a mature conglomerate quartzite clast and an immature feldspathic sandstone of the overlying weakly metamorphosed Mesoproterozoic Changcheng System. These zircons have oscillatory zoning, showing they were sourced from igneous rocks. The quartzite clast zircons contain only rare apatite inclusions (<1%), with domains with apatite habit now occupied by in-tergrowths of muscovite+quartz±Fe-oxides±baddeleyite. We interpret that these were once voids after apatite inclusions that had dissolved during Mesoproterozoic weathering, which were then filled with clays±±silica and then weakly metamorphosed. Zircons in the immature feldspathic sandstone show a greater amount of preserved apatite (11%), but with petrographic evidence of replacement of
16. Cellular communication through light.
Directory of Open Access Journals (Sweden)
Daniel Fels
Full Text Available Information transfer is a fundamental of life. A few studies have reported that cells use photons (from an endogenous source as information carriers. This study finds that cells can have an influence on other cells even when separated with a glass barrier, thereby disabling molecule diffusion through the cell-containing medium. As there is still very little known about the potential of photons for intercellular communication this study is designed to test for non-molecule-based triggering of two fundamental properties of life: cell division and energy uptake. The study was performed with a cellular organism, the ciliate Paramecium caudatum. Mutual exposure of cell populations occurred under conditions of darkness and separation with cuvettes (vials allowing photon but not molecule transfer. The cell populations were separated either with glass allowing photon transmission from 340 nm to longer waves, or quartz being transmittable from 150 nm, i.e. from UV-light to longer waves. Even through glass, the cells affected cell division and energy uptake in neighboring cell populations. Depending on the cuvette material and the number of cells involved, these effects were positive or negative. Also, while paired populations with lower growth rates grew uncorrelated, growth of the better growing populations was correlated. As there were significant differences when separating the populations with glass or quartz, it is suggested that the cell populations use two (or more frequencies for cellular information transfer, which influences at least energy uptake, cell division rate and growth correlation. Altogether the study strongly supports a cellular communication system, which is different from a molecule-receptor-based system and hints that photon-triggering is a fine tuning principle in cell chemistry.
17. Mechanisms for strain localization within Archaean craton: A structural study from the Bundelkhand Tectonic Zone, north-central India (United States)
Sarkar, Saheli; Patole, Vishal; Saha, Lopamudra; Pati, Jayanta Kumar; Nasipuri, Pritam
The transformation of palaeo-continents involve breakup, dispersal and reassembly of cratonic blocks by collisional suturing that develop a network of orogenic (mobile) belts around the periphery of the stable cratons. The nature of deformation in the orogenic belt depends on the complex interaction of fracturing, plastic deformation and diffusive mass transfer. Additionally, the degree and amount of melting during regional deformation is critical as the presence of melt facilitates the rate of diffusive mass transfer and weakens the rock by reducing the effective viscosity of the deformed zone. The nature of strain localization and formation of ductile shear zones surrounding the cratonic blocks have been correlated with Proterozoic-Palaeozoic supercontinent assembly (Columbia, Rodinia and Gondwana reconstruction). Although, a pre-Columbia supercontinent termed as Kenorland has been postulated, there is no evidence that supports the notion due to lack of the presence of shear zones within the Archaean cratonic blocks. In this contribution, we present the detailed structural analysis of ductile shear zones within the Bundelkhand craton. The ductlile shear zone is termed as Bundelkhand Tectonic Zone (BTZ) that extends east-west for nearly 300 km throughout the craton with a width of two-three kilometer . In the north-central India, the Bundelkhand craton is exposed over an area of 26,000 sq. The craton is bounded by Central Indian Tectonic zone in the south, the Great Boundary fault in the west and by the rocks of Lesser Himalaya in the north. A series of tonalite-trondjhemite-granodiorite gneiss are the oldest rocks of the Bundelkhand craton that also contains a succession of metamorphosed supracrustal rocks comprising of banded iron formation, quartzite, calc-silicate and ultramafic rocks. K-feldspar bearing granites intrude the tonalite-trondjhemite-granodiorite and the supracrustal rocks during the time span of 2.1 to 2.5 Ga. The TTGs near Babina, in central
18. Archaean associations of volcanics, granulites and eclogites of the Belomorian province, Fennoscandian Shield and its geodynamic interpretation (United States)
Slabunov, Alexander
An assembly of igneous (TTG-granitoids and S-type leucogranites and calc-alkaline-, tholeiite-, kometiite-, boninite- and adakite-series metavolcanics) and metamorphic (eclogite-, moderate-pressure (MP) granulite- and MP amphibolite-facies rocks) complexes, strikingly complete for Archaean structures, is preserved in the Belomorian province of the Fennoscandian Shield. At least four Meso-Neoarchaean different-aged (2.88-2.82; 2.81-2.78; ca. 2.75 and 2.735-2.72 Ga) calc-alkaline and adakitic subduction-type volcanics were identified as part of greenstone belts in the Belomorian province (Slabunov, 2008). 2.88-2.82 and ca. 2.78 Ga fore-arc type graywacke units were identified in this province too (Bibikova et al., 2001; Mil'kevich et al., 2007). Ca.2.7 Ga volcanics were generated in extension structures which arose upon the collapse of an orogen. The occurrence of basalt-komatiite complexes, formed in most greenstone belts in oceanic plateau settings under the influence of mantle plumes, shows the abundance of these rocks in subducting oceanic slabs. Multiple (2.82-2.79; 2.78-2.76; 2.73-2.72; 2.69-2.64 Ga) granulite-facies moderate-pressure metamorphic events were identified in the Belomorian province (Volodichev, 1990; Slabunov et al., 2006). The earliest (2.82-2.79 Ga) event is presumably associated with accretionary processes upon the formation of an old continental crust block. Two other events (2.78-2.76; 2.73-2.72 Ga) are understood as metamorphic processes in suprasubduction setting. Late locally active metamorphism is attributed to the emplacement of mafic intrusions upon orogen collapse. Three groups of crustal eclogites with different age were identified in the Belomorian province: Mesoarchaean (2.88-2.86 and 2.82-2.80 Ga) eclogites formed from MORB and oceanic plateau type basalts and oceanic high-Mg rocks (Mints et al., 2011; Shchipansky at al., 2012); Neoarchaean (2.72 Ga) eclogites formed from MORB and oceanic plateau type basalts. The formation of
19. Sm-Nd data for mafic-ultramafic intrusions in the Svecofennian (1.88 Ga Kotalahti Nickel Belt, Finland – implications for crustal contamination at the Archaean/Proterozoic boundary
Directory of Open Access Journals (Sweden)
Hannu V. Makkonen
Full Text Available Sm-Nd data were determined for eight mafic-ultramafic intrusions from the Svecofennian (1.88 Ga Kotalahti Nickel Belt, Finland. The intrusions represent both mineralized and barren types and are located at varying distances from the Archaean/Proterozoic boundary.The samples for the 23 Sm-Nd isotope analyses were taken mostly from the ultramafic differentiates. Results show a range in initial εNd values at 1880 Ma from -2.4 to +2.0. No relationship can be found between the degree of Ni mineralization and initial εNd values, whilea correlation with the geological domain and country rocks is evident. The Majasaari and Törmälä intrusions, which have positive εNd values, were emplaced within the Svecofennian domain in proximity to 1.92 Ga tonalitic gneisses, which have previously yielded initialεNd values of ca. +3. In contrast, the Luusniemi intrusion, which has an εNd value of -2.4 is situated close to exposed Archaean crust. Excluding two analyses from the Rytky intrusion, all data from the Koirus N, Koirus S, Kotalahti, Rytky and Kylmälahti intrusions, withinerror limits, fall in the range -0.7 ± 0.3. The results support the concept of contamination by Archaean material in proximity to the currently exposed craton margin. The composition of the proposed parental magma for the intrusions is close to EMORB, with initialεNd values near +4.
20. Greenland from Archaean to Quaternary, Descriptive text to the 1995 Geological Map of Greenland 1:2 500 000, 2nd edition
Directory of Open Access Journals (Sweden)
Kalsbeek, Feiko
Full Text Available The geological development of Greenland spans a period of nearly 4 Ga, from Eoarchaean to the Quaternary. Greenland is the largest island on Earth with a total area of 2 166 000 km2, but only c. 410 000 km2 are exposed bedrock, the remaining part being covered by a major ice sheet (the Inland Ice reaching over 3 km in thickness. The adjacent offshore areas underlain by continental crust have an area of c. 825 000 km2. Greenland is dominated by crystalline rocks of the Precambrian shield, which formed during a succession of Archaean and Palaeoproterozoic orogenic events and stabilised as a part of the Laurentian shield about 1600 Ma ago. The shield area can be divided into three distinct types of basement provinces: (1 Archaean rocks (3200–2600 Ma old, with local older units up to >3800Ma that were almost unaffected by Proterozoic or later orogenic activity; (2 Archaean terrains reworked during the Palaeoproterozoic around 1900–1750 Ma ago; and (3 terrains mainly composed of juvenile Palaeoproterozoic rocks (2000–1750 Ma in age.Subsequent geological developments mainly took place along the margins of the shield. During the Proterozoic and throughout the Phanerozoic major sedimentary basins formed, notably in North and North-East Greenland, in which sedimentary successions locally reaching 18 km in thickness were deposited. Palaeozoic orogenic activity affected parts of these successions in the Ellesmerian fold belt of North Greenland and the East Greenland Caledonides; the latter also incorporates reworked Precambrian crystalline basement complexes. Late Palaeozoic and Mesozoic sedimentary basins developed along the continent–ocean margins in North, East and West Greenland and are now preserved both onshore and offshore. Their development was closely related to continental break-up with formation of rift basins. Initial rifting in East Greenland in latest Devonian to earliest Carboniferous time and succeeding phases culminated with the
1. Flat Cellular (UMTS) Networks
NARCIS (Netherlands)
Bosch, H.G.P.; Samuel, L.G.; Mullender, S.J.; Polakos, P.; Rittenhouse, G.
Traditionally, cellular systems have been built in a hierarchical manner: many specialized cellular access network elements that collectively form a hierarchical cellular system. When 2G and later 3G systems were designed there was a good reason to make system hierarchical: from a cost-perspective i
2. Precambrian crustal evolution and Cretaceous–Palaeogene faulting in West Greenland: A lead isotope study of an Archaean gold prospect in the Attu region, Nagssugtoqidian orogen, West Greenland
Directory of Open Access Journals (Sweden)
Stendal, Henrik
Full Text Available This paper presents a lead isotope investigation of a gold prospect south of the village Attu in the northern part of the Nagssugtoqidian orogen in central West Greenland. The Attu gold prospect is a replacement gold occurrence, related to a shear/mylonite zone along a contact between orthogneissand amphibolite within the Nagssugtoqidian orogenic belt. The mineral occurrence is small, less than 0.5 m wide, and can be followed along strike for several hundred metres. The mineral assemblage is pyrite, chalcopyrite, magnetite and gold. The host rocks to the gold prospect are granulite facies ‘brown gneisses’ and amphibolites. Pb-isotopic data on magnetite from the host rocks yield an isochron in a 207Pb/204Pb vs. 206Pb/204Pb diagram, giving a date of 3162 ± 43 Ma (MSWD = 0.5. This date is interpreted to represent the age of the rocks in question, and is older than dates obtained from rocks elsewhere within the Nagssugtoqidian orogen. Pb-isotopic data on cataclastic magnetite from the shear zone lie close to this isochron, indicating a similar origin. The Pb-isotopic compositions of the ore minerals are similar to those previously obtained from the close-by ~2650 Ma Rifkol granite, and suggest a genetic link between the emplacement of this granite and the formation of the ore minerals in the shear/mylonite zone. Consequently, the age of the gold mineralisation is interpreted tobe late Archaean.
3. Cellular automata a parallel model
CERN Document Server
Mazoyer, J
Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.
4. Reversible quantum cellular automata
CERN Document Server
Schumacher, B
We define quantum cellular automata as infinite quantum lattice systems with discrete time dynamics, such that the time step commutes with lattice translations and has strictly finite propagation speed. In contrast to earlier definitions this allows us to give an explicit characterization of all local rules generating such automata. The same local rules also generate the global time step for automata with periodic boundary conditions. Our main structure theorem asserts that any quantum cellular automaton is structurally reversible, i.e., that it can be obtained by applying two blockwise unitary operations in a generalized Margolus partitioning scheme. This implies that, in contrast to the classical case, the inverse of a nearest neighbor quantum cellular automaton is again a nearest neighbor automaton. We present several construction methods for quantum cellular automata, based on unitaries commuting with their translates, on the quantization of (arbitrary) reversible classical cellular automata, on quantum c...
5. Recognition of > or = 3850 Ma water-lain sediments in West Greenland and their significance for the early Archaean Earth (United States)
Nutman, A. P.; Mojzsis, S. J.; Friend, C. R.; Bada, J. L. (Principal Investigator)
A layered body of amphibolite, banded iron formation (BIF), and ultramafic rocks from the island of Akilia, southern West Greenland, is cut by a quartz-dioritic sheet from which SHRIMP zircon 206Pb/207Pb weighted mean ages of 3865 +/- 11 Ma and 3840 +/- 8 Ma (2 sigma) can be calculated by different approaches. Three other methods of assessing the zircon data yield ages of >3830 Ma. The BIFs are interpreted as water-lain sediments, which with a minimum age of approximately 3850 Ma, are the oldest sediments yet documented. These rocks provide proof that by approximately 3850 Ma (1) there was a hydrosphere, supporting the chemical sedimentation of BIF, and that not all water was stored in hydrous minerals, and (2) that conditions satisfying the stability of liquid water imply surface temperatures were similar to present. Carbon isotope data of graphitic microdomains in apatite from the Akilia island BIF are consistent with a bio-organic origin (Mojzsis et al. 1996), extending the record of life on Earth to >3850 Ma. Life and surface water by approximately 3850 Ma provide constraints on either the energetics or termination of the late meteoritic bombardment event (suggested from the lunar cratering record) on Earth.
6. Late Archaean mantle metasomatism below eastern Indian craton: Evidence from trace elements, REE geochemistry and Sr-Nd-O isotope systematics of ultramafic dykes
Indian Academy of Sciences (India)
Abhijit Roy; A Sarkar; S Jeyakumar; S K Aggrawal; M Ebihara; H Satoh
Trace, rare earth elements (REE), Rb-Sr, Sm-Nd and O isotope studies have been carried out on ultramafic (harzburgite and lherzolite) dykes belonging to the newer dolerite dyke swarms of eastern Indian craton. The dyke swarms were earlier considered to be the youngest mafic magmatic activity in this region having ages not older than middle to late Proterozoic. The study indicates that the ultramafic members of these swarms are in fact of late Archaean age (Rb-Sr isochron age 2613 ± 177 Ma, Sri ∼0.702 ± 0.004) which attests that out of all the cratonic blocks of India, eastern Indian craton experienced earliest stabilization event. Primitive mantle normalized trace element plots of these dykes display enrichment in large ion lithophile elements (LILE), pronounced Ba, Nb and Sr depletions but very high concentrations of Cr and Ni. Chondrite normalised REE plots exhibit light REE (LREE) enrichment with nearly flat heavy REE (HREE; ( HREE)N ∼ 2-3 times chondrite, (Gd/Yb)N∼1). The Nd(t) values vary from +1.23 to −3.27 whereas 18O values vary from +3.16‰ to +5.29‰ (average +3.97‰ ± 0.75‰) which is lighter than the average mantle value. Isotopic, trace and REE data together indicate that during 2.6 Ga the nearly primitive mantle below the eastern Indian Craton was metasomatised by the fluid (±silicate melt) coming out from the subducting early crust resulting in LILE and LREE enriched, Nb depleted, variable Nd, low Sri(0.702) and low 18O bearing EMI type mantle. Magmatic blobs of this metasomatised mantle were subsequently emplaced in deeper levels of the granitic crust which possibly originated due to the same thermal pulse.
7. Green Cellular - Optimizing the Cellular Network for Minimal Emission from Mobile Stations
CERN Document Server
Ezri, Doron
Wireless systems, which include cellular phones, have become an essential part of the modern life. However the mounting evidence that cellular radiation might adversely affect the health of its users, leads to a growing concern among authorities and the general public. Radiating antennas in the proximity of the user, such as antennas of mobile phones are of special interest for this matter. In this paper we suggest a new architecture for wireless networks, aiming at minimal emission from mobile stations, without any additional radiation sources. The new architecture, dubbed Green Cellular, abandons the classical transceiver base station design and suggests the augmentation of transceiver base stations with receive only devices. These devices, dubbed Green Antennas, are not aiming at coverage extension but rather at minimizing the emission from mobile stations. We discuss the implications of the Green Cellular architecture on 3G and 4G cellular technologies. We conclude by showing that employing the Green Cell...
8. Heterogeneous cellular networks
CERN Document Server
Hu, Rose Qingyang
9. Nanostructured cellular networks. (United States)
Moriarty, P; Taylor, M D R; Brust, M
Au nanocrystals spin-coated onto silicon from toluene form cellular networks. A quantitative statistical crystallography analysis shows that intercellular correlations drive the networks far from statistical equilibrium. Spin-coating from hexane does not produce cellular structure, yet a strong correlation is retained in the positions of nanocrystal aggregates. Mechanisms based on Marangoni convection alone cannot account for the variety of patterns observed, and we argue that spinodal decomposition plays an important role in foam formation.
10. Space Biology: Patterns of Life (United States)
Salisbury, Frank B.
Present knowledge about Mars is compared with past beliefs about the planet. Biological experiments that indicate life may exist on Mars are interpreted. Life patterns or biological features that might be postulated for extraterrestrial life are presented at the molecular, cellular, organism, and ecosystem levels. (DS)
11. Oceanic plateau model for continental crustal growth in the Archaean: A case study from the Kostomuksha greenstone belt, NW Baltic Shield (United States)
Samsonov, A. V.; Shchipansky, A. A.; Jochum, K. P.; Mezger, K.; Hofmann, A. W.; Puchtel, I. S.
Field studies combined with chemical and isotope data indicate that the Kostomuksha greenstone belt in the NW Baltic Shield consists of two lithotectonic terranes, one mafic igneous and the other sedimentary, separated by a major shear zone. The former contains submarine komatiite-basalt lavas and volcaniclastic lithologies, and the latter is composed of shelf-type rocks and BIF. Komatiitic and basaltic samples yield Sm-Nd and Pb-Pb isochron ages of 2843+/-39 and 2813+/-78 Ma, respectively. Their trace-element compositions resemble those of recent Pacific oceanic flood basalts with primitive-mantle normalized Nb/Th of 1.5-2.1 and Nb/La of 1.0-1.5. This is in sharp contrast with island arc and most continental magmas, which are characterized by Nb/(Th,La)N≪1. Calculated initial Nd-isotope compositions (ɛNd(T)=+2.8 to +3.4) plot close to an evolution line previously inferred for major orogens (``MOMO''), which is also consistent with the compositions of recent oceanic plateaux. The high liquidus temperatures of the komatiite magmas (1550°C) and their Al-depleted nature require an unusually hot (1770°C) mantle source for the lavas (>200°C hotter than the ambient mantle at 2.8 Ga), and are consistent with their formation in a deep mantle plume in equilibrium with residual garnet. This plume had the thermal potential to produce oceanic crust with an average thickness of ~30 km underlain by a permanently buoyant refractory lithospheric mantle keel. Nb/U ratios in the komatiites and basalts calculated on the basis of Th-U-Pb relationships range from 35 to 47 and are thus similar to those observed in modern MORB and OIB. This implies that some magma source regions of the Kostomuksha lavas have undergone a degree of continental material extraction comparable with those found in the modern mantle. The mafic terrane is interpreted as a remnant of the upper crustal part of an Archaean oceanic plateau. When the newly formed plateau reached the active continental margin
12. Epigenetics and Cellular Metabolism (United States)
Xu, Wenyi; Wang, Fengzhong; Yu, Zhongsheng; Xin, Fengjiao
13. Architected Cellular Materials (United States)
Schaedler, Tobias A.; Carter, William B.
Additive manufacturing enables fabrication of materials with intricate cellular architecture, whereby progress in 3D printing techniques is increasing the possible configurations of voids and solids ad infinitum. Examples are microlattices with graded porosity and truss structures optimized for specific loading conditions. The cellular architecture determines the mechanical properties and density of these materials and can influence a wide range of other properties, e.g., acoustic, thermal, and biological properties. By combining optimized cellular architectures with high-performance metals and ceramics, several lightweight materials that exhibit strength and stiffness previously unachievable at low densities were recently demonstrated. This review introduces the field of architected materials; summarizes the most common fabrication methods, with an emphasis on additive manufacturing; and discusses recent progress in the development of architected materials. The review also discusses important applications, including lightweight structures, energy absorption, metamaterials, thermal management, and bioscaffolds.
14. Cellular blue naevus
Directory of Open Access Journals (Sweden)
Mittal R
Full Text Available A 31-year-old man had asymptomatic, stationary, 1.5X2 cm, shiny, smooth, dark blue nodule on dorsum of right hand since 12-14 years. In addition he had developed extensive eruption of yellow to orange papulonodular lesions on extensors of limbs and buttocks since one and half months. Investigations confirmed that yellow papules were xanthomatosis and he had associated diabetes mellitus and hyperlipidaemia. Biopsy of blue nodule confirmed the clinical diagnosis of cellular blue naevus. Cellular blue naevus is rare and its association with xanthomatosis and diabetes mellitus were interesting features of above patients which is being reported for its rarity.
15. Cellular rehabilitation of photobiomodulation (United States)
Liu, Timon Cheng-Yi; Yuan, Jian-Qin; Wang, Yan-Fang; Xu, Xiao-Yang; Liu, Song-Hao
Homeostasis is a term that refers to constancy in a system. A cell in homeostasis normally functions. There are two kinds of processes in the internal environment and external environment of a cell, the pathogenic processes (PP) which disrupts the old homeostasis (OH), and the sanogenetic processes (SP) which restores OH or establishes a new homeostasis (NH). Photobiomodualtion (PBM), the cell-specific effects of low intensity monochromatic light or low intensity laser irradiation (LIL) on biological systems, is a kind of modulation on PP or SP so that there is no PBM on a cell in homeostasis. There are two kinds of pathways mediating PBM, the membrane endogenetic chromophores mediating pathways which often act through reactive oxygen species, and membrane proteins mediating pathways which often enhance cellular SP so that it might be called cellular rehabilitation. The cellular rehabilitation of PBM will be discussed in this paper. It is concluded that PBM might modulate the disruption of cellular homeostasis induced by pathogenic factors such as toxin until OH has been restored or NH has been established, but can not change homeostatic processes from one to another one.
16. Cellular Response to Irradiation
Institute of Scientific and Technical Information of China (English)
LIU Bo; YAN Shi-Wei
To explore the nonlinear activities of the cellular signaling system composed of one transcriptional arm and one protein-interaction arm, we use an irradiation-response module to study the dynamics of stochastic interactions.It is shown that the oscillatory behavior could be described in a unified way when the radiation-derived signal and noise are incorporated.
17. Cellular Automation of Galactic Habitable Zone
CERN Document Server
Vukotic, Branislav
We present a preliminary results of our Galactic Habitable Zone (GHZ) 2D probabilistic cellular automata models. The relevant time-scales (emergence of life, it's diversification and evolution influenced with the global risk function) are modeled as the probability matrix elements and are chosen in accordance with the Copernican principle to be well-represented by the data inferred from the Earth's fossil record. With Fermi's paradox as a main boundary condition the resulting histories of astrobiological landscape are discussed.
18. Volcanological constraints of Archaean tectonics (United States)
Thurston, P. C.; Ayres, L. D.
Volcanological and trace element geochemical data can be integrated to place some constraints upon the size, character and evolutionary history of Archean volcanic plumbing, and hence indirectly, Archean tectonics. The earliest volcanism in any greenhouse belt is almost universally tholeitic basalt. Archean mafic magma chambers were usually the site of low pressure fractionation of olivine, plagioclase and later Cpx + or - an oxide phase during evolution of tholeitic liquids. Several models suggest basalt becoming more contaminated by sial with time. Data in the Uchi Subprovince shows early felsic volcanics to have fractionated REE patterns followed by flat REE pattern rhyolites. This is interpreted as initial felsic liquids produced by melting of a garnetiferous mafic source followed by large scale melting of LIL-rich sial. Rare andesites in the Uchi Subprovince are produced by basalt fractionation, direct mantle melts and mixing of basaltic and tonalitic liquids. Composite dikes in the Abitibi Subprovince have a basaltic edge with a chill margin, a rhyolitic interior with no basalt-rhyolite chill margin and partially melted sialic inclusions. Ignimbrites in the Uchi and Abitibi Subprovinces have mafic pumice toward the top. Integration of these data suggest initial mantle-derived basaltic liquids pond in a sialic crust, fractionate and melt sial. The inirial melts low in heavy REE are melts of mafic material, subsequently melting of adjacent sial produces a chamber with a felsic upper part underlain by mafic magma.
19. The Hadean-Archaean Environment
Sleep, Norman H.
A sparse geological record combined with physics and molecular phylogeny constrains the environmental conditions on the early Earth. The Earth began hot after the moon-forming impact and cooled to the point where liquid water was present in ∼10 million years Subsequently, a few asteroid impacts may have briefly heated surface environments, leaving only thermophile survivors in kilometer-deep rocks. A warm 500 K, 100 bar CO2 greenhouse persisted until subducted oceanic crust sequestered CO2 in...
20. Feldspar palaeo-isochrons from early Archaean TTGs: Pb-isotope evidence for a high U/Pb terrestrial Hadean crust (United States)
Kamber, B. S.; Whitehouse, M. J.; Moorbath, S.; Collerson, K. D.
Feldspar lead-isotope data for 22 early Archaean (3.80-3.82 Ga) tonalitic gneisses from an area south of the Isua greenstone belt (IGB),West Greenland, define a steep linear trend in common Pb-isotope space with an apparent age of 4480+/-77 Ma. Feldspars from interleaved amphibolites yield a similar array corresponding to a date of 4455+/-540 Ma. These regression lines are palaeo-isochrons that formed during feldspar-whole rock Pb-isotope homogenisation a long time (1.8 Ga) after rock formation but confirm the extreme antiquity (3.81 Ga) of the gneissic protoliths [1; this study]. Unlike their whole-rock counterparts, feldspar palaeo-isochrons are immune to rotational effects caused by the vagaries of U/Pb fractionation. Hence, comparison of their intercept with mantle Pb-isotope evolution models yields meaningful information regarding the source history of the magmatic precursors. The locus of intersection between the palaeo-isochrons and terrestrial mantle Pb-isotope evolution lines shows that the gneissic precursors of these 3.81 Ga gneisses were derived from a source with a substantially higher time-integrated U/Pb ratio than the mantle. Similar requirements for a high U/Pb source have been found for IGB BIF [2], IGB carbonate [3], and particularly IGB galenas [4]. Significantly, a single high U/Pb source that separated from the MORB-source mantle at ca. 4.3 Ga with a 238U/204Pb of ca. 10.5 provides a good fit to all these observations. In contrast to many previous models based on Nd and Hf-isotope evidence we propose that this reservoir was not a mantle source but the Hadean basaltic crust which, in the absence of an operating subduction process, encased the early Earth. Differentiation of the early high U/Pb basaltic crust could have occurred in response to gravitational sinking of cold mantle material or meteorite impact, and produced zircon-bearing magmatic rocks. The subchondritic Hf-isotope ratios of ca. 3.8 Ga zircons support this model [5] provided that
1. High-K calc-alkaline magmatism at the Archaean-Proterozoic boundary: implications for mantle metasomatism and continental crust petrogenesis. Example of the Bulai pluton (Central Limpopo Belt, South Africa) (United States)
The Neoarchaean Bulai pluton, intrusive within the supracrustal granulites of the Central Limpopo Belt (Limpopo Province, South Africa) is made up of large volumes of porphyritic granodiorites with subordinate enclaves and dykes which have monzodioritic and charno-enderbitic compositions. New U-Pb LA-ICP-MS dating on separated zircons yielded pluton emplacement ages ranging between 2.60 and 2.63 Ga, which are slightly older than previous proposed ages (~ 2.57-2.61 Ga). The whole-rock major- and trace-element composition of the Bulai pluton evidences unequivocal affinities with "high-Ti" late-Archaean sanukitoids. It belongs to a high-K calc-alkaline differentiation suite, with metaluminous affinities (0.7 affinities, such as eNd ranging between -0.5 and 0.5, and in addition, are very rich in all incompatible trace elements, which is particularly obvious in monzodioritic enclaves and enderbites where primitive mantle-normalized LILE and LREE contents are up to 300. These characteristics point to an enriched mantle source for the Bulai batholith. Chondrite normalized, REE patterns are strongly fractionated ([La/Yb]N ~ 25-80), mainly due to high LREE contents (LaN ~ 250-630), and chiefly high HFSE contents (Nb ~ 15-45 ppm ; up to 770 ppm Zr) indicate that the metasomatic agent is a silicic melt rather than a hydrous fluid. Moreover, based on high Nb/Ta, Th/Rb, La/Rb and low Sr/Nd and Ba/La, we suggest that the metasomatic agent is a granitic melt generated by melting of terrigenous sediments. Interactions of this melt with mantle peridotites implies that sediments are located under a mantle slice; geometry which is easily achieved in subduction zone settings. This conclusion is supported by the fact that Bulai trace element patterns are very similar to those of sub-actual potassic magmas generated in magmatic arc environments by interactions between mantle and terrigenous sediments (e.g. Sunda arc). Geochemical modeling indicates that the mafic facies of the Bulai
2. Lipids, lipid droplets and lipoproteins in their cellular context; an ultrastructural approach
NARCIS (Netherlands)
Mesman, R.J.
Lipids are essential for cellular life, functioning either organized as bilayer membranes to compartmentalize cellular processes, as signaling molecules or as metabolic energy storage. Our current knowledge on lipid organization and cellular lipid homeostasis is mainly based on biochemical data. How
3. Environment Aware Cellular Networks
KAUST Repository
Ghazzai, Hakim
The unprecedented rise of mobile user demand over the years have led to an enormous growth of the energy consumption of wireless networks as well as the greenhouse gas emissions which are estimated currently to be around 70 million tons per year. This significant growth of energy consumption impels network companies to pay huge bills which represent around half of their operating expenditures. Therefore, many service providers, including mobile operators, are looking for new and modern green solutions to help reduce their expenses as well as the level of their CO2 emissions. Base stations are the most power greedy element in cellular networks: they drain around 80% of the total network energy consumption even during low traffic periods. Thus, there is a growing need to develop more energy-efficient techniques to enhance the green performance of future 4G/5G cellular networks. Due to the problem of traffic load fluctuations in cellular networks during different periods of the day and between different areas (shopping or business districts and residential areas), the base station sleeping strategy has been one of the main popular research topics in green communications. In this presentation, we present several practical green techniques that provide significant gains for mobile operators. Indeed, combined with the base station sleeping strategy, these techniques achieve not only a minimization of the fossil fuel consumption but also an enhancement of mobile operator profits. We start with an optimized cell planning method that considers varying spatial and temporal user densities. We then use the optimal transport theory in order to define the cell boundaries such that the network total transmit power is reduced. Afterwards, we exploit the features of the modern electrical grid, the smart grid, as a new tool of power management for cellular networks and we optimize the energy procurement from multiple energy retailers characterized by different prices and pollutant
4. Cellular automata: structures
Ollinger, Nicolas
Jury : François Blanchard (Rapporteur), Marianne Delorme (Directeur), Jarkko Kari (Président), Jacques Mazoyer (Directeur), Dominique Perrin, Géraud Sénizergues (Rapporteur); Cellular automata provide a uniform framework to study an important problem of "complex systems" theory: how and why do system with a easily understandable -- local -- microscopic behavior can generate a more complicated -- global -- macroscopic behavior? Since its introduction in the 40s, a lot of work has been done to ...
5. Engineering Cellular Metabolism
DEFF Research Database (Denmark)
Nielsen, Jens; Keasling, Jay
Metabolic engineering is the science of rewiring the metabolism of cells to enhance production of native metabolites or to endow cells with the ability to produce new products. The potential applications of such efforts are wide ranging, including the generation of fuels, chemicals, foods, feeds...... of metabolic engineering and will discuss how new technologies can enable metabolic engineering to be scaled up to the industrial level, either by cutting off the lines of control for endogenous metabolism or by infiltrating the system with disruptive, heterologous pathways that overcome cellular regulation....
6. Literature Review on Dynamic Cellular Manufacturing System (United States)
Nouri Houshyar, A.; Leman, Z.; Pakzad Moghadam, H.; Ariffin, M. K. A. M.; Ismail, N.; Iranmanesh, H.
In previous decades, manufacturers faced a lot of challenges because of globalization and high competition in markets. These problems arise from shortening product life cycle, rapid variation in demand of products, and also rapid changes in manufcaturing technologies. Nowadays most manufacturing companies expend considerable attention for improving flexibility and responsiveness in order to overcome these kinds of problems and also meet customer's needs. By considering the trend toward the shorter product life cycle, the manufacturing environment is towards manufacturing a wide variety of parts in small batches [1]. One of the major techniques which are applied for improving manufacturing competitiveness is Cellular Manufacturing System (CMS). CMS is type of manufacturing system which tries to combine flexibility of job shop and also productivity of flow shop. In addition, Dynamic cellular manufacturing system which considers different time periods for the manufacturing system becomes an important topic and attracts a lot of attention to itself. Therefore, this paper made attempt to have a brief review on this issue and focused on all published paper on this subject. Although, this topic gains a lot of attention to itself during these years, none of previous researchers focused on reviewing the literature of that which can be helpful and useful for other researchers who intend to do the research on this topic. Therefore, this paper is the first study which has focused and reviewed the literature of dynamic cellular manufacturing system.
7. Astrobiological Complexity with Probabilistic Cellular Automata
CERN Document Server
Vukotić, B
Search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, but has yet been quantitatively modeled only rarely and in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are to be modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of large and ambiguous input parameters' space. We perform a simple clustering analysis of typical astrobiological histories and discuss the relevant boundary conditions of practical importance for planning and guiding actual empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and ne...
8. Cellular image classification
CERN Document Server
Xu, Xiang; Lin, Feng
This book introduces new techniques for cellular image feature extraction, pattern recognition and classification. The authors use the antinuclear antibodies (ANAs) in patient serum as the subjects and the Indirect Immunofluorescence (IIF) technique as the imaging protocol to illustrate the applications of the described methods. Throughout the book, the authors provide evaluations for the proposed methods on two publicly available human epithelial (HEp-2) cell datasets: ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis. First, the reading of imaging results is significantly influenced by one’s qualification and reading systems, causing high intra- and inter-laboratory variance. The authors present a low-order LP21 fiber mode for optical single cell manipulation and imaging staining patterns of HEp-2 cells. A focused four-lobed mode distribution is stable and effective in optical...
9. Multiuser Cellular Network
CERN Document Server
Bao, Yi; Chen, Ming
Modern radio communication is faced with a problem about how to distribute restricted frequency to users in a certain space. Since our task is to minimize the number of repeaters, a natural idea is enlarging coverage area. However, coverage has restrictions. First, service area has to be divided economically as repeater's coverage is limited. In this paper, our fundamental method is to adopt seamless cellular network division. Second, underlying physics content in frequency distribution problem is interference between two close frequencies. Consequently, we choose a proper frequency width of 0.1MHz and a relevantly reliable setting to apply one frequency several times. We make a few general assumptions to simplify real situation. For instance, immobile users yield to homogenous distribution; repeaters can receive and transmit information in any given frequency in duplex operation; coverage is mainly decided by antenna height. Two models are built up to solve 1000 users and 10000 users situations respectively....
10. Engineering Cellular Metabolism. (United States)
Nielsen, Jens; Keasling, Jay D
Metabolic engineering is the science of rewiring the metabolism of cells to enhance production of native metabolites or to endow cells with the ability to produce new products. The potential applications of such efforts are wide ranging, including the generation of fuels, chemicals, foods, feeds, and pharmaceuticals. However, making cells into efficient factories is challenging because cells have evolved robust metabolic networks with hard-wired, tightly regulated lines of communication between molecular pathways that resist efforts to divert resources. Here, we will review the current status and challenges of metabolic engineering and will discuss how new technologies can enable metabolic engineering to be scaled up to the industrial level, either by cutting off the lines of control for endogenous metabolism or by infiltrating the system with disruptive, heterologous pathways that overcome cellular regulation.
11. Cellular bioluminescence imaging. (United States)
Welsh, David K; Noguchi, Takako
Bioluminescence imaging of live cells has recently been recognized as an important alternative to fluorescence imaging. Fluorescent probes are much brighter than bioluminescent probes (luciferase enzymes) and, therefore, provide much better spatial and temporal resolution and much better contrast for delineating cell structure. However, with bioluminescence imaging there is virtually no background or toxicity. As a result, bioluminescence can be superior to fluorescence for detecting and quantifying molecules and their interactions in living cells, particularly in long-term studies. Structurally diverse luciferases from beetle and marine species have been used for a wide variety of applications, including tracking cells in vivo, detecting protein-protein interactions, measuring levels of calcium and other signaling molecules, detecting protease activity, and reporting circadian clock gene expression. Such applications can be optimized by the use of brighter and variously colored luciferases, brighter microscope optics, and ultrasensitive, low-noise cameras. This article presents a review of how bioluminescence differs from fluorescence, its applications to cellular imaging, and available probes, optics, and detectors. It also gives practical suggestions for optimal bioluminescence imaging of single cells.
12. Cellular neurothekeoma with melanocytosis. (United States)
Wu, Ren-Chin; Hsieh, Yi-Yueh; Chang, Yi-Chin; Kuo, Tseng-Tong
Cellular neurothekeoma (CNT) is a benign dermal tumor mainly affecting the head and neck and the upper extremities. It is characterized histologically by interconnecting fascicles of plump spindle or epithelioid cells with ample cytoplasm infiltrating in the reticular dermis. The histogenesis of CNT has been controversial, although it is generally regarded as an immature counterpart of classic/myxoid neurothekeoma, a tumor with nerve sheath differentiation. Two rare cases of CNT containing melanin-laden cells were described. Immunohistochemical study with NKI/C3, vimentin, epithelial membrane antigen, smooth muscle antigen, CD34, factor XIIIa, collagen type IV, S100 protein and HMB-45 was performed. Both cases showed typical growth pattern of CNT with interconnecting fascicles of epithelioid cells infiltrating in collagenous stroma. One of the nodules contained areas exhibiting atypical cytological features. Melanin-laden epithelioid or dendritic cells were diffusely scattered throughout one nodule, and focally present in the peripheral portion of the other nodule. Both nodules were strongly immunoreactive to NKI/C3 and vimentin, but negative to all the other markers employed. CNT harboring melanin-laden cells may pose diagnostic problems because of their close resemblance to nevomelanocytic lesions and other dermal mesenchymal tumors. These peculiar cases may also provide further clues to the histogenesis of CNT.
13. Free fall and cellular automata
Directory of Open Access Journals (Sweden)
Pablo Arrighi
Full Text Available Three reasonable hypotheses lead to the thesis that physical phenomena can be described and simulated with cellular automata. In this work, we attempt to describe the motion of a particle upon which a constant force is applied, with a cellular automaton, in Newtonian physics, in Special Relativity, and in General Relativity. The results are very different for these three theories.
14. About Strongly Universal Cellular Automata
Directory of Open Access Journals (Sweden)
Maurice Margenstern
Full Text Available In this paper, we construct a strongly universal cellular automaton on the line with 11 states and the standard neighbourhood. We embed this construction into several tilings of the hyperbolic plane and of the hyperbolic 3D space giving rise to strongly universal cellular automata with 10 states.
15. Life Sciences Conference ’From Enzymology to Cellular Biology’. (United States)
polyacrylamide of renin. More than 80 years elapsed gel electrophoresis). Multienzyme com- between the discovery of the pressor ef- plexes containing the same...plications involve mainly hydrolytic re- an E. coZi host/vector system. Using actions. Starch and protein hydrolysis site-specific mutagenesis techniques
16. MIMO Communication for Cellular Networks
CERN Document Server
Huang, Howard; Venkatesan, Sivarama
As the theoretical foundations of multiple-antenna techniques evolve and as these multiple-input multiple-output (MIMO) techniques become essential for providing high data rates in wireless systems, there is a growing need to understand the performance limits of MIMO in practical networks. To address this need, MIMO Communication for Cellular Networks presents a systematic description of MIMO technology classes and a framework for MIMO system design that takes into account the essential physical-layer features of practical cellular networks. In contrast to works that focus on the theoretical performance of abstract MIMO channels, MIMO Communication for Cellular Networks emphasizes the practical performance of realistic MIMO systems. A unified set of system simulation results highlights relative performance gains of different MIMO techniques and provides insights into how best to use multiple antennas in cellular networks under various conditions. MIMO Communication for Cellular Networks describes single-user,...
17. Cellular systems biology profiling applied to cellular models of disease. (United States)
Giuliano, Kenneth A; Premkumar, Daniel R; Strock, Christopher J; Johnston, Patricia; Taylor, Lansing
Building cellular models of disease based on the approach of Cellular Systems Biology (CSB) has the potential to improve the process of creating drugs as part of the continuum from early drug discovery through drug development and clinical trials and diagnostics. This paper focuses on the application of CSB to early drug discovery. We discuss the integration of protein-protein interaction biosensors with other multiplexed, functional biomarkers as an example in using CSB to optimize the identification of quality lead series compounds.
18. A Course in Cellular Bioengineering. (United States)
Lauffenburger, Douglas A.
Gives an overview of a course in chemical engineering entitled "Cellular Bioengineering," dealing with how chemical engineering principles can be applied to molecular cell biology. Topics used are listed and some key references are discussed. Listed are 85 references. (YP)
19. Energy Landscape of Cellular Networks (United States)
Wang, Jin
Cellular Networks are in general quite robust and perform their biological functions against the environmental perturbations. Progresses have been made from experimental global screenings, topological and engineering studies. However, there are so far few studies of why the network should be robust and perform biological functions from global physical perspectives. In this work, we will explore the global properties of the network from physical perspectives. The aim of this work is to develop a conceptual framework and quantitative physical methods to study the global nature of the cellular network. The main conclusion of this presentation is that we uncovered the underlying energy landscape for several small cellular networks such as MAPK signal transduction network and gene regulatory networks, from the experimentally measured or inferred inherent chemical reaction rates. The underlying dynamics of these networks can show bi-stable as well as oscillatory behavior. The global shapes of the energy landscapes of the underlying cellular networks we have studied are robust against perturbations of the kinetic rates and environmental disturbances through noise. We derived a quantitative criterion for robustness of the network function from the underlying landscape. It provides a natural explanation of the robustness and stability of the network for performing biological functions. We believe the robust landscape is a global universal property for cellular networks. We believe the robust landscape is a quantitative realization of Darwinian principle of natural selection at the cellular network level. It may provide a novel algorithm for optimizing the network connections, which is crucial for the cellular network design and synthetic biology. Our approach is general and can be applied to other cellular networks.
20. Mathematical Modeling of Cellular Metabolism. (United States)
Berndt, Nikolaus; Holzhütter, Hermann-Georg
Cellular metabolism basically consists of the conversion of chemical compounds taken up from the extracellular environment into energy (conserved in energy-rich bonds of organic phosphates) and a wide array of organic molecules serving as catalysts (enzymes), information carriers (nucleic acids), and building blocks for cellular structures such as membranes or ribosomes. Metabolic modeling aims at the construction of mathematical representations of the cellular metabolism that can be used to calculate the concentration of cellular molecules and the rates of their mutual chemical interconversion in response to varying external conditions as, for example, hormonal stimuli or supply of essential nutrients. Based on such calculations, it is possible to quantify complex cellular functions as cellular growth, detoxification of drugs and xenobiotic compounds or synthesis of exported molecules. Depending on the specific questions to metabolism addressed, the methodological expertise of the researcher, and available experimental information, different conceptual frameworks have been established, allowing the usage of computational methods to condense experimental information from various layers of organization into (self-) consistent models. Here, we briefly outline the main conceptual frameworks that are currently exploited in metabolism research.
1. A Giant Vulvar Mass: A Case Study of Cellular Angiofibroma
Directory of Open Access Journals (Sweden)
Ümit Aydın
Full Text Available Cellular angiofibroma is a mesenchymal tumor that affects both genders. Nucci et al. first described it in 1997. Cellular angiofibroma is generally a small and asymptomatic mass that primarily arises in the vulvar-vaginal region, although rare cases have been reported in the pelvic and extrapelvic regions. It affects women most often during the fifth decade of life. The treatment requires simple local excision due to low local recurrence and no chance of metastasization. The current study presents a case of angiofibroma in the vulvar region that measured approximately 20 cm.
2. Is Glutathione the Major Cellular Target of Cisplatin?
DEFF Research Database (Denmark)
Kasherman, Yonit; Stürup, Stefan; gibson, dan
Cisplatin is an anticancer drug whose efficacy is limited because tumors develop resistance to the drug. Resistant cells often have elevated levels of cellular glutathione (GSH), believed to be the major cellular target of cisplatin that inactivates the drug by binding to it irreversibly, forming...... [Pt(SG)2] adducts. We show by [1H,15N] HSQC that the half-life of 15N labeled cisplatin in whole cell extracts is 75 min, but no Pt-GSH adducts were observed. When the low molecular mass fraction (cisplatin, binding to GSH was observed probably due to removal...
3. Planets and Life (United States)
Sullivan, Woodruff T., III; Baross, John
4. Hierarchical Cellular Structures in High-Capacity Cellular Communication Systems
CERN Document Server
Jain, R K; Agrawal, N K
In the prevailing cellular environment, it is important to provide the resources for the fluctuating traffic demand exactly in the place and at the time where and when they are needed. In this paper, we explored the ability of hierarchical cellular structures with inter layer reuse to increase the capacity of mobile communication network by applying total frequency hopping (T-FH) and adaptive frequency allocation (AFA) as a strategy to reuse the macro and micro cell resources without frequency planning in indoor pico cells [11]. The practical aspects for designing macro- micro cellular overlays in the existing big urban areas are also explained [4]. Femto cells are inducted in macro / micro / pico cells hierarchical structure to achieve the required QoS cost effectively.
5. Simulating Complex Systems by Cellular Automata
CERN Document Server
Kroc, Jiri; Hoekstra, Alfons G
Deeply rooted in fundamental research in Mathematics and Computer Science, Cellular Automata (CA) are recognized as an intuitive modeling paradigm for Complex Systems. Already very basic CA, with extremely simple micro dynamics such as the Game of Life, show an almost endless display of complex emergent behavior. Conversely, CA can also be designed to produce a desired emergent behavior, using either theoretical methodologies or evolutionary techniques. Meanwhile, beyond the original realm of applications - Physics, Computer Science, and Mathematics – CA have also become work horses in very different disciplines such as epidemiology, immunology, sociology, and finance. In this context of fast and impressive progress, spurred further by the enormous attraction these topics have on students, this book emerges as a welcome overview of the field for its practitioners, as well as a good starting point for detailed study on the graduate and post-graduate level. The book contains three parts, two major parts on th...
6. Classifying cellular automata using grossone (United States)
D'Alotto, Louis
This paper proposes an application of the Infinite Unit Axiom and grossone, introduced by Yaroslav Sergeyev (see [7] - [12]), to the development and classification of one and two-dimensional cellular automata. By the application of grossone, new and more precise nonarchimedean metrics on the space of definition for one and two-dimensional cellular automata are established. These new metrics allow us to do computations with infinitesimals. Hence configurations in the domain space of cellular automata can be infinitesimally close (but not equal). That is, they can agree at infinitely many places. Using the new metrics, open disks are defined and the number of points in each disk computed. The forward dynamics of a cellular automaton map are also studied by defined sets. It is also shown that using the Infinite Unit Axiom, the number of configurations that follow a given configuration, under the forward iterations of cellular automaton maps, can now be computed and hence a classification scheme developed based on this computation.
7. Prognosis of Different Cellular Generations
Directory of Open Access Journals (Sweden)
Preetish Ranjan
Full Text Available Technological advancement in mobile telephony from 1G to 3G, 4G and 5G has a very axiomatic fact that made an entire world a global village. The cellular system employs a different design approach and technology that most commercial radio and television system use. In the cellular system, the service area is divided into cells and a transmitter is designed to serve an individual cell. The system seeks to make efficient use of available channels by using low-power transmitters to allow frequency reuse at a smaller distance. Maximizing the number of times each channel can be reused in a given geographical area is the key to an efficient cellular system design. During the past three decades, the world has seen significant changes in telecommunications industry. There have been some remarkable aspects to the rapid growth in wireless communications, as seen by the large expansion in mobile systems. This paper focuses on “Past, Present & Future of Cellular Telephony” and some light has been thrown upon the technologies of the cellular systems, namely 1G, 2G, 2.5G, 3G and future generations like 4G and 5G systems as well.
8. Cellular Auxin Homeostasis:Gatekeeping Is Housekeeping
Institute of Scientific and Technical Information of China (English)
Michel Ruiz Rosquete; Elke Barbez; Jürgen Kleine-Vehn
The phytohormone auxin is essential for plant development and contributes to nearly every aspect of the plant life cycle.The spatio-temporal distribution of auxin depends on a complex interplay between auxin metabolism and cell-to-cell auxin transport.Auxin metabolism and transport are both crucial for plant development;however,it largely remains to be seen how these processes are integrated to ensure defined cellular auxin levels or even gradients within tissues or organs.In this review,we provide a glance at very diverse topics of auxin biology,such as biosynthesis,conjugation,oxidation,and transport of auxin.This broad,but certainly superficial,overview highlights the mutual importance of auxin metabolism and transport.Moreover,it allows pinpointing how auxin metabolism and transport get integrated to jointly regulate cellular auxin homeostasis.Even though these processes have been so far only separately studied,we assume that the phytohormonal crosstalk integrates and coordinates auxin metabolism and transport.Besides the integrative power of the global hormone signaling,we additionally introduce the hypothetical concept considering auxin transport components as gatekeepers for auxin responses.
9. Novel Materials for Cellular Nanosensors
DEFF Research Database (Denmark)
Sasso, Luigi
The monitoring of cellular behavior is useful for the advancement of biomedical diagnostics, drug development and the understanding of a cell as the main unit of the human body. Micro- and nanotechnology allow for the creation of functional devices that enhance the study of cellular dynamics...... modifications for electrochemical nanosensors for the detection of analytes released from cells. Two type of materials were investigated, each pertaining to the two different aspects of such devices: peptide nanostructures were studied for the creation of cellular sensing substrates that mimic in vivo surfaces...... and that offer advantages of functionalization, and conducting polymers were used as electrochemical sensor surface modifications for increasing the sensitivity towards relevant analytes, with focus on the detection of dopamine released from cells via exocytosis. Vertical peptide nanowires were synthesized from...
10. Cellular models for Parkinson's disease. (United States)
Falkenburger, Björn H; Saridaki, Theodora; Dinter, Elisabeth
Developing new therapeutic strategies for Parkinson's disease requires cellular models. Current models reproduce the two most salient changes found in the brains of patients with Parkinson's disease: The degeneration of dopaminergic neurons and the existence of protein aggregates consisting mainly of α-synuclein. Cultured cells offer many advantages over studying Parkinson's disease directly in patients or in animal models. At the same time, the choice of a specific cellular model entails the requirement to focus on one aspect of the disease while ignoring others. This article is intended for researchers planning to use cellular models for their studies. It describes for commonly used cell types the aspects of Parkinson's disease they model along with technical advantages and disadvantages. It might also be helpful for researchers from other fields consulting literature on cellular models of Parkinson's disease. Important models for the study of dopaminergic neuron degeneration include Lund human mesencephalic cells and primary neurons, and a case is made for the use of non-dopaminergic cells to model pathogenesis of non-motor symptoms of Parkinson's disease. With regard to α-synuclein aggregates, this article describes strategies to induce and measure aggregates with a focus on fluorescent techniques. Cellular models reproduce the two most salient changes of Parkinson's disease, the degeneration of dopaminergic neurons and the existence of α-synuclein aggregates. This article is intended for researchers planning to use cellular models for their studies. It describes for commonly used cell types and treatments the aspects of Parkinson's disease they model along with technical advantages and disadvantages. Furthermore, this article describes strategies to induce and measure aggregates with a focus on fluorescent techniques. This article is part of a special issue on Parkinson disease.
Directory of Open Access Journals (Sweden)
Full Text Available Cellular interactions involve many types of cell surface molecules and operate via homophilic and/or heterophilic protein-protein and protein-carbohydrate binding. Our investigations in different model-systems (marine invertebrates and mammals have provided direct evidence that a novel class of primordial proteoglycans, named by us gliconectins, can mediate cell adhesion via a new alternative molecular mechanism of polyvalent carbohydrate-carbohydrate binding. Biochemical characterization of isolated and purified glyconectins revealed the presence of specific carbohydrate structures, acidic glycans, different from classical glycosaminoglycans. Such acidic glycans of high molecular weight containing fucose, glucuronic or galacturonic acids, and sulfate groups, originally found in sponges and sea urchin embryos, may represent a new class of carbohydrate carcino-embryonal antigens in mice and humans. Such interactions between biological macromolecules are usually investigated by kinetic binding studies, calorimetric methods, X-ray diffraction, nuclear magnetic resonance, and other spectroscopic analyses. However, these methods do not supply a direct estimation of the intermolecular binding forces that are fundamental for the function of the ligand-receptor association. Recently, we have introduced atomic force microscopy to quantify the binding strength between cell adhesion proteoglycans. Measurement of binding forces intrinsic to cell adhesion proteoglycans is necessary to assess their contribution to the maintenance of the anatomical integrity of multicellular organisms. As a model, we selected the glyconectin 1, a cell adhesion proteoglycan isolated from the marine sponge Microciona prolifera. This glyconectin mediates in vivo cell recognition and aggregation via homophilic, species-specific, polyvalent, and calcium ion-dependent carbohydrate-carbohydrate interactions. Under physiological conditions, an adhesive force of up to 400 piconewtons
12. Cellular basis of Alzheimer's disease. (United States)
Bali, Jitin; Halima, Saoussen Ben; Felmy, Boas; Goodger, Zoe; Zurbriggen, Sebastian; Rajendran, Lawrence
Alzheimer's disease (AD) is the most common form of neurodegenerative disease. A characteristic feature of the disease is the presence of amyloid-β (Aβ) which either in its soluble oligomeric form or in the plaque-associated form is causally linked to neurodegeneration. Aβ peptide is liberated from the membrane-spanning -amyloid precursor protein by sequential proteolytic processing employing β- and γ-secretases. All these proteins involved in the production of Aβ peptide are membrane associated and hence, membrane trafficking and cellular compartmentalization play important roles. In this review, we summarize the key cellular events that lead to the progression of AD.
13. Family Life (United States)
... With Family and Friends > Family Life Request Permissions Family Life Approved by the Cancer.Net Editorial Board , ... your outlook on the future. Friends and adult family members The effects of cancer on your relationships ...
14. On Cellular MIMO Channel Capacity (United States)
Adachi, Koichi; Adachi, Fumiyuki; Nakagawa, Masao
To increase the transmission rate without bandwidth expansion, the multiple-input multiple-output (MIMO) technique has recently been attracting much attention. The MIMO channel capacity in a cellular system is affected by the interference from neighboring co-channel cells. In this paper, we introduce the cellular channel capacity and evaluate its outage capacity, taking into account the frequency-reuse factor, path loss exponent, standard deviation of shadowing loss, and transmission power of a base station (BS). Furthermore, we compare the cellular MIMO downlink channel capacity with those of other multi-antenna transmission techniques such as single-input multiple-output (SIMO) and space-time block coded multiple-input single-output (STBC-MISO). We show that the optimum frequency-reuse factor F that maximizes 10%-outage capacity is 3 and both 50%- and 90%-outage capacities is 1 irrespective of the type of multi-antenna transmission technique, where q%-outage capacity is defined as the channel capacity that gives an outage probability of q%. We also show that the cellular MIMO channel capacity is always higher than those of SIMO and STBC-MISO.
15. Cellular uptake of metallated cobalamins
DEFF Research Database (Denmark)
Tran, MQT; Stürup, Stefan; Lambert, Ian H.;
Cellular uptake of vitamin B12-cisplatin conjugates was estimated via detection of their metal constituents (Co, Pt, and Re) by inductively coupled plasma mass spectrometry (ICP-MS). Vitamin B12 (cyano-cob(iii)alamin) and aquo-cob(iii)alamin [Cbl-OH2](+), which differ in the β-axial ligands (CN(-...
16. Geodynamic evolution of the West Africa between 2.2 and 2 Ga: the Archaean style of the Birimian greenstone belts and the sedimentary basins in northeastern Ivory-Coast; Evolution de l`Afrique de l`Ouest entre 2,2 Ga et 2 Ga: le style archeen des ceintures vertes et des ensembles sedimentaires birimiens du nord-est de la Cote-d`Ivoire
Energy Technology Data Exchange (ETDEWEB)
Vidal, M.; Pouclet, A. [Orleans Univ., 45 (France); Delor, C. [Bureau de Recherches Geologiques et Minieres (BRGM), 45 - Orleans (France); Simeon, Y. [ANTEA, 45 - Orleans (France); Alric, G. [Etablissements Binou, 27 - Le Mesnil-Fuguet (France)
The litho-structural features of Palaeo-proterozoic terrains of northeastern Ivory-Coast, greenstones belts and then sedimentary (basin Birimian), are similar to those of Archaean terrains. Their early deformation is only voluminal deformation due to granitoid intrusions, mainly between 2.2 and 2.16 Ga. The shortening deformation (main deformation) is expressed by right folds and transcurrent shear zones ca 2.1 Ga. Neither thrust deformation nor high pressure metamorphic assemblages are known. This pattern of flexible and hot crust, at least between 2.2 and 2.16 Ga, is pole apart to a collisional pattern, proposed for West African Craton by some authors. The Archaean/Palaeo-proterozoic boundary would not represent a drastic change of the geodynamic evolution of the crust. (authors). 60 refs., 5 figs., 6 photos.
17. Bioinspired Cellular Structures: Additive Manufacturing and Mechanical Properties (United States)
Stampfl, J.; Pettermann, H. E.; Liska, R.
Biological materials (e.g., wood, trabecular bone, marine skeletons) rely heavily on the use of cellular architecture, which provides several advantages. (1) The resulting structures can bear the variety of "real life" load spectra using a minimum of a given bulk material, featuring engineering lightweight design principles. (2) The inside of the structures is accessible to body fluids which deliver the required nutrients. (3) Furthermore, cellular architectures can grow organically by adding or removing individual struts or by changing the shape of the constituting elements. All these facts make the use of cellular architectures a reasonable choice for nature. Using additive manufacturing technologies (AMT), it is now possible to fabricate such structures for applications in engineering and biomedicine. In this chapter, we present methods that allow the 3D computational analysis of the mechanical properties of cellular structures with open porosity. Various different cellular architectures including disorder are studied. In order to quantify the influence of architecture, the apparent density is always kept constant. Furthermore, it is shown that how new advanced photopolymers can be used to tailor the mechanical and functional properties of the fabricated structures.
18. Reversibly assembled cellular composite materials. (United States)
Cheung, Kenneth C; Gershenfeld, Neil
We introduce composite materials made by reversibly assembling a three-dimensional lattice of mass-produced carbon fiber-reinforced polymer composite parts with integrated mechanical interlocking connections. The resulting cellular composite materials can respond as an elastic solid with an extremely large measured modulus for an ultralight material (12.3 megapascals at a density of 7.2 milligrams per cubic centimeter). These materials offer a hierarchical decomposition in modeling, with bulk properties that can be predicted from component measurements and deformation modes that can be determined by the placement of part types. Because site locations are locally constrained, structures can be produced in a relative assembly process that merges desirable features of fiber composites, cellular materials, and additive manufacturing.
19. Picturing Life
Directory of Open Access Journals (Sweden)
Molly Bathje MS, OTR/L
Full Text Available The cover art of the summer 2013 issue of The Open Journal of Occupational Therapy provided by Jonathan Darnall reflects his unique life perspective, current roles, and values. An exploration of Jon’s life experience reveals how creative arts, including photography, have positively influenced his life and inform OT practitioners about the benefits of photography as an intervention and an occupation.
20. Glycosylation regulates prestin cellular activity. (United States)
Rajagopalan, Lavanya; Organ-Darling, Louise E; Liu, Haiying; Davidson, Amy L; Raphael, Robert M; Brownell, William E; Pereira, Fred A
Glycosylation is a common post-translational modification of proteins and is implicated in a variety of cellular functions including protein folding, degradation, sorting and trafficking, and membrane protein recycling. The membrane protein prestin is an essential component of the membrane-based motor driving electromotility changes (electromotility) in the outer hair cell (OHC), a central process in auditory transduction. Prestin was earlier identified to possess two N-glycosylation sites (N163, N166) that, when mutated, marginally affect prestin nonlinear capacitance (NLC) function in cultured cells. Here, we show that the double mutant prestin(NN163/166AA) is not glycosylated and shows the expected NLC properties in the untreated and cholesterol-depleted HEK 293 cell model. In addition, unlike WT prestin that readily forms oligomers, prestin(NN163/166AA) is enriched as monomers and more mobile in the plasma membrane, suggesting that oligomerization of prestin is dependent on glycosylation but is not essential for the generation of NLC in HEK 293 cells. However, in the presence of increased membrane cholesterol, unlike the hyperpolarizing shift in NLC seen with WT prestin, cells expressing prestin(NN163/166AA) exhibit a linear capacitance function. In an attempt to explain this finding, we discovered that both WT prestin and prestin(NN163/166AA) participate in cholesterol-dependent cellular trafficking. In contrast to WT prestin, prestin(NN163/166AA) shows a significant cholesterol-dependent decrease in cell-surface expression, which may explain the loss of NLC function. Based on our observations, we conclude that glycosylation regulates self-association and cellular trafficking of prestin(NN163/166AA). These observations are the first to implicate a regulatory role for cellular trafficking and sorting in prestin function. We speculate that the cholesterol regulation of prestin occurs through localization to and internalization from membrane microdomains by
1. Stochastic Nature in Cellular Processes
Institute of Scientific and Technical Information of China (English)
刘波; 刘圣君; 王祺; 晏世伟; 耿轶钊; SAKATA Fumihiko; GAO Xing-Fa
The importance of stochasticity in cellular processes is increasingly recognized in both theoretical and experimental studies. General features of stochasticity in gene regulation and expression are briefly reviewed in this article, which include the main experimental phenomena, classification, quantization and regulation of noises. The correlation and transmission of noise in cascade networks are analyzed further and the stochastic simulation methods that can capture effects of intrinsic and extrinsic noise are described.
2. Cellular fiber–reinforced concrete
Isachenko S.; Kodzoev M.
Methods disperse reinforcement of concrete matrix using polypropylene, glass, basalt and metal fibers allows to make the construction of complex configuration, solve the problem of frost products. Dispersed reinforcement reduces the overall weight of the structures. The fiber replaces the secondary reinforcement, reducing the volume of use of structural steel reinforcement. Cellular Fiber concretes are characterized by high-performance properties, especially increased bending strength and...
3. Identification of Nonstationary Cellular Automata
Institute of Scientific and Technical Information of China (English)
The principal feature of nonstationary cellular automata(NCA) is that a local transitiol rule of each cell is changed at each time step depending on neighborhood configuration at previous time step.The identification problem for NCA is extraction of local transition rules and the establishment of mechanism for changing these rules using sequence of NCA configurations.We present serial and parallel algorithms for identification of NCA.
Popescu, O.; Sumanovski, L. T.; I. Checiu; Elisabeta Popescu; G. N. Misevic
Cellular interactions involve many types of cell surface molecules and operate via homophilic and/or heterophilic protein-protein and protein-carbohydrate binding. Our investigations in different model-systems (marine invertebrates and mammals) have provided direct evidence that a novel class of primordial proteoglycans, named by us gliconectins, can mediate cell adhesion via a new alternative molecular mechanism of polyvalent carbohydrate-carbohydrate binding. Biochemical characterization of...
5. The insect cellular immune response
Institute of Scientific and Technical Information of China (English)
Michael R. Strand
The innate immune system of insects is divided into humoral defenses that include the production of soluble effector molecules and cellular defenses like phagocytosis and encapsulation that are mediated by hemocytes. This review summarizes current understanding of the cellular immune response. Insects produce several terminally differentiated types of hemocytes that are distinguished by morphology, molecular and antigenic markers, and function. The differentiated hemocytes that circulate in larval or nymphal stage insects arise from two sources: progenitor cells produced during embryogenesis and mesodermally derived hematopoietic organs. Regulation of hematopoiesis and hemocyte differentiation also involves several different signaling pathways. Phagocytosis and encapsulation require that hemocytes first recognize a given target as foreign followed by activation of downstream signaling and effector responses. A number of humoral and cellular receptors have been identified that recognize different microbes and multicellular parasites. In turn, activation of these receptors stimulates a number of signaling pathways that regulate different hemocyte functions. Recent studies also identify hemocytes as important sources of a number of humoral effector molecules required for killing different foreign invaders.
6. Progress of cellular dedifferentiation research
Institute of Scientific and Technical Information of China (English)
LIU Hu-xian; HU Da-hai; JIA Chi-yu; FU Xiao-bing
Differentiation, the stepwise specialization of cells, and transdifferentiation, the apparent switching of one cell type into another, capture much of the stem cell spotlight. But dedifferentiation, the developmental reversal of a cell before it reinvents itself, is an important process too. In multicellular organisms, cellular dedifferentiation is the major process underlying totipotency, regeneration and formation of new stem cell lineages. In humans,dedifferentiation is often associated with carcinogenesis.The study of cellular dedifferentiation in animals,particularly early events related to cell fate-switch and determination, is limited by the lack of a suitable,convenient experimental system. The classic example of dedifferentiation is limb and tail regeneration in urodele amphibians, such as salamanders. Recently, several investigators have shown that certain mammalian cell types can be induced to dedifferentiate to progenitor cells when stimulated with the appropriate signals or materials. These discoveries open the possibility that researchers might enhance the endogenous regenerative capacity of mammals by inducing cellular dedifferentiation in vivo.
7. Cellular communications a comprehensive and practical guide
CERN Document Server
Tripathi, Nishith
Even as newer cellular technologies and standards emerge, many of the fundamental principles and the components of the cellular network remain the same. Presenting a simple yet comprehensive view of cellular communications technologies, Cellular Communications provides an end-to-end perspective of cellular operations, ranging from physical layer details to call set-up and from the radio network to the core network. This self-contained source forpractitioners and students represents a comprehensive survey of the fundamentals of cellular communications and the landscape of commercially deployed
8. Life sciences
Energy Technology Data Exchange (ETDEWEB)
Day, L. (ed.)
This document is the 1989--1990 Annual Report for the Life Sciences Divisions of the University of California/Lawrence Berkeley Laboratory. Specific progress reports are included for the Cell and Molecular Biology Division, the Research Medicine and Radiation Biophysics Division (including the Advanced Light Source Life Sciences Center), and the Chemical Biodynamics Division. 450 refs., 46 figs. (MHB)
9. What Little Remains of Life
Directory of Open Access Journals (Sweden)
Ben G. Yacobi
Full Text Available Life is a non-equilibrium process involving a series of biochemical reactions that use external energy to build the cellular structure and the complexity of the organism. Humans strive for the continuation of their existence. This can be based on an illusory afterlife according to religion or on practical efforts through technology. But the temporality of individual lives is inevitable. Death in the universe, governed by the law of entropy, is unavoidable. Thus, as all traces of human existence fade away,what is most important in life is what one thinks and does at the present moment, when one is fully aware of life. Capturing each moment and filling it with some meaning is the only consolation in life.
10. Cellular immune responses to HIV (United States)
McMichael, Andrew J.; Rowland-Jones, Sarah L.
The cellular immune response to the human immunodeficiency virus, mediated by T lymphocytes, seems strong but fails to control the infection completely. In most virus infections, T cells either eliminate the virus or suppress it indefinitely as a harmless, persisting infection. But the human immunodeficiency virus undermines this control by infecting key immune cells, thereby impairing the response of both the infected CD4+ T cells and the uninfected CD8+ T cells. The failure of the latter to function efficiently facilitates the escape of virus from immune control and the collapse of the whole immune system.
11. Repaglinide at a cellular level
DEFF Research Database (Denmark)
Krogsgaard Thomsen, M; Bokvist, K; Høy, M
To investigate the hormonal and cellular selectivity of the prandial glucose regulators, we have undertaken a series of experiments, in which we characterised the effects of repaglinide and nateglinide on ATP-sensitive potassium ion (KATP) channel activity, membrane potential and exocytosis in rat...... pancreatic alpha-cells and somatotrophs. We found a pharmacological dissociation between the actions on KATP channels and exocytosis and suggest that compounds that, unlike repaglinide, have direct stimulatory effects on exocytosis in somatotrophs and alpha- and beta-cells, such as sulphonylureas...
12. ING proteins in cellular senescence. (United States)
Menéndez, Camino; Abad, María; Gómez-Cabello, Daniel; Moreno, Alberto; Palmero, Ignacio
Cellular senescence is an effective anti-tumor barrier that acts by restraining the uncontrolled proliferation of cells carrying potentially oncogenic alterations. ING proteins are putative tumor suppressor proteins functionally linked to the p53 pathway and to chromatin regulation. ING proteins exert their tumor-protective action through different types of responses. Here, we review the evidence on the participation of ING proteins, mainly ING1 and ING2, in the implementation of the senescent response. The currently available data support an important role of ING proteins as regulators of senescence, in connection with the p53 pathway and chromatin organization.
13. Cellular Analogs of Operant Behavior. (United States)
ing of single units can be demonstrated, does such a cellular subset of neighboring pyramidal cells and interneurons as well as process contribute...excite dopamine neurons by -hyperpolarization of local interneurons . J. Neurosci. 12:483-488; 1992. Kosterlitz, H. W. Biosynthesis of morphine in the...II 197 1 1 ocation preltereite iindiis- HOIdlod VA. artdo \\M I . \\.ill I ’’’’i i R i l’)89) ( pioid mediationl lserilI1 reintoree-Cd bK amlphetcamine
14. 5G Ultra-Dense Cellular Networks
Ge, Xiaohu; Tu, Song; Mao, Guoqiang; Wang, Cheng-xiang; Han, Tao
Traditional ultra-dense wireless networks are recommended as a complement for cellular networks and are deployed in partial areas, such as hotspot and indoor scenarios. Based on the massive multiple-input multi-output (MIMO) antennas and the millimeter wavecommunication technologies, the 5G ultra-dense cellular network is proposed to deploy in overall cellular scenarios. Moreover, a distribution network architecture is presented for 5G ultra-dense cellular networks. Furthermore, the backhaul ...
15. Melanoma screening with cellular phones.
Directory of Open Access Journals (Sweden)
Cesare Massone
Full Text Available BACKGROUND: Mobile teledermatology has recently been shown to be suitable for teledermatology despite limitations in image definition in preliminary studies. The unique aspect of mobile teledermatology is that this system represents a filtering or triage system, allowing a sensitive approach for the management of patients with emergent skin diseases. METHODOLOGY/PRINCIPAL FINDINGS: In this study we investigated the feasibility of teleconsultation using a new generation of cellular phones in pigmented skin lesions. 18 patients were selected consecutively in the Pigmented Skin Lesions Clinic of the Department of Dermatology, Medical University of Graz, Graz (Austria. Clinical and dermoscopic images were acquired using a Sony Ericsson with a built-in two-megapixel camera. Two teleconsultants reviewed the images on a specific web application ( where images had been uploaded in JPEG format. Compared to the face-to-face diagnoses, the two teleconsultants obtained a score of correct telediagnoses of 89% and of 91.5% reporting the clinical and dermoscopic images, respectively. CONCLUSIONS/SIGNIFICANCE: The present work is the first study performing mobile teledermoscopy using cellular phones. Mobile teledermatology has the potential to become an easy applicable tool for everyone and a new approach for enhanced self-monitoring for skin cancer screening in the spirit of the eHealth program of the European Commission Information for Society and Media.
16. Cellular functions of the microprocessor. (United States)
Macias, Sara; Cordiner, Ross A; Cáceres, Javier F
The microprocessor is a complex comprising the RNase III enzyme Drosha and the double-stranded RNA-binding protein DGCR8 (DiGeorge syndrome critical region 8 gene) that catalyses the nuclear step of miRNA (microRNA) biogenesis. DGCR8 recognizes the RNA substrate, whereas Drosha functions as an endonuclease. Recent global analyses of microprocessor and Dicer proteins have suggested novel functions for these components independent of their role in miRNA biogenesis. A HITS-CLIP (high-throughput sequencing of RNA isolated by cross-linking immunoprecipitation) experiment designed to identify novel substrates of the microprocessor revealed that this complex binds and regulates a large variety of cellular RNAs. The microprocessor-mediated cleavage of several classes of RNAs not only regulates transcript levels, but also modulates alternative splicing events, independently of miRNA function. Importantly, DGCR8 can also associate with other nucleases, suggesting the existence of alternative DGCR8 complexes that may regulate the fate of a subset of cellular RNAs. The aim of the present review is to provide an overview of the diverse functional roles of the microprocessor.
17. Cellular automata modelling of SEIRS
Institute of Scientific and Technical Information of China (English)
Liu Quan-Xing; Jin Zhen
In this paper the SEIRS epidemic spread is analysed, and a two-dimensional probability cellular automata model for SEIRS is presented. Each cellular automation cell represents a part of the population that may be found in one of five states of individuals: susceptible, exposed (or latency), infected, immunized (or recovered) and death. Here studied are the effects of two cases on the epidemic spread. i.e. the effects of non-segregation and segregation on the latency and the infected of population. The conclusion is reached that the epidemic will persist in the case of non-segregation but it will decrease in the case of segregation. The proposed model can serve as a basis for the development of algorithms to simulate real epidemics based on real data. Last we find the density series of the exposed and the infected will fluctuate near a positive equilibrium point, when the constant for the immunized is less than its corresponding constant τ0. Our theoretical results are verified by numerical simulations.
18. Cellular Senescence and the Biology of Aging, Disease, and Frailty. (United States)
LeBrasseur, Nathan K; Tchkonia, Tamara; Kirkland, James L
Population aging simultaneously highlights the remarkable advances in science, medicine, and public policy, and the formidable challenges facing society. Indeed, aging is the primary risk factor for many of the most common chronic diseases and frailty, which result in profound social and economic costs. Population aging also reveals an opportunity, i.e. interventions to disrupt the fundamental biology of aging could significantly delay the onset of age-related conditions as a group, and, as a result, extend the healthy life span, or health span. There is now considerable evidence that cellular senescence is an underlying mechanism of aging and age-related conditions. Cellular senescence is a process in which cells lose the ability to divide and damage neighboring cells by the factors they secrete, collectively referred to as the senescence-associated secretory phenotype (SASP). Herein, we discuss the concept of cellular senescence, review the evidence that implicates cellular senescence and SASP in age-related deterioration, hyperproliferation, and inflammation, and propose that this underlying mechanism of aging may play a fundamental role in the biology of frailty.
19. Micro/Nanoscale Thermometry for Cellular Thermal Sensing. (United States)
Bai, Tingting; Gu, Ning
Temperature is a key parameter to regulate cell function, and biochemical reactions inside a cell in turn affect the intracellular temperature. It's vitally necessary to measure cellular temperature to provide sufficient information to fully understand life science, while the conventional methods are incompetent. Over the last decade, many ingenious thermometers have been developed with the help of nanotechnology, and real-time intracellular temperature measurement at the micro/nanoscale has been realized with high temporal-spatial resolution. With the help of these techniques, several mechanisms of thermogenesis inside cells have been investigated, even in subcellular organelles. Here, current developments in cellular thermometers are highlighted, and a picture of their applications in cell biology is presented. In particular, temperature measurement principle, thermometer design and latest achievements are also introduced. Finally, the existing opportunities and challenges in this ongoing field are discussed.
20. Kinetic Adaptations of Myosins for Their Diverse Cellular Functions. (United States)
Heissler, Sarah M; Sellers, James R
Members of the myosin superfamily are involved in all aspects of eukaryotic life. Their function ranges from the transport of organelles and cargos to the generation of membrane tension, and the contraction of muscle. The diversity of physiological functions is remarkable, given that all enzymatically active myosins follow a conserved mechanoenzymatic cycle in which the hydrolysis of ATP to ADP and inorganic phosphate is coupled to either actin-based transport or tethering of actin to defined cellular compartments. Kinetic capacities and limitations of a myosin are determined by the extent to which actin can accelerate the hydrolysis of ATP and the release of the hydrolysis products and are indispensably linked to its physiological tasks. This review focuses on kinetic competencies that - together with structural adaptations - result in myosins with unique mechanoenzymatic properties targeted to their diverse cellular functions.
1. Cellular uptake of metallated cobalamins
DEFF Research Database (Denmark)
Tran, Mai Thanh Quynh; Stürup, Stefan; Lambert, Ian Henry
Cellular uptake of vitamin B12-cisplatin conjugates was estimated via detection of their metal constituents (Co, Pt, and Re) by inductively coupled plasma mass spectrometry (ICP-MS). Vitamin B12 (cyano-cob(iii)alamin) and aquo-cob(iii)alamin [Cbl-OH2](+), which differ in the β-axial ligands (CN...... including [Cbl-OH2](+), [{Co}-CN-{cis-PtCl(NH3)2}](+), [{Re}-{Co}-CN-{cis-PtCl(NH3)2}](+), and [{Co}-CN-{trans-Pt(Cyt)(NH3)2}](2+) (Cyt = cytarabin) was high compared to neutral B12, which implied the existence of an additional internalization pathway for charged B12 vitamin analogs. The affinities...
2. Discrete geodesics and cellular automata
CERN Document Server
Arrighi, Pablo
This paper proposes a dynamical notion of discrete geodesics, understood as straightest trajectories in discretized curved spacetime. The notion is generic, as it is formulated in terms of a general deviation function, but readily specializes to metric spaces such as discretized pseudo-riemannian manifolds. It is effective: an algorithm for computing these geodesics naturally follows, which allows numerical validation---as shown by computing the perihelion shift of a Mercury-like planet. It is consistent, in the continuum limit, with the standard notion of timelike geodesics in a pseudo-riemannian manifold. Whether the algorithm fits within the framework of cellular automata is discussed at length. KEYWORDS: Discrete connection, parallel transport, general relativity, Regge calculus.
3. Thermomechanical characterisation of cellular rubber (United States)
Seibert, H.; Scheffer, T.; Diebels, S.
This contribution discusses an experimental possibility to characterise a cellular rubber in terms of the influence of multiaxiality, rate dependency under environmental temperature and its behaviour under hydrostatic pressure. In this context, a mixed open and closed cell rubber based on an ethylene propylene diene monomer is investigated exemplarily. The present article intends to give a general idea of the characterisation method and the considerable effects of this special type of material. The main focus lies on the experimental procedure and the used testing devices in combination with the analysis methods such as true three-dimensional digital image correlation. The structural compressibility is taken into account by an approach for a material model using the Theory of Porous Media with additional temperature dependence.
4. Cellular compartmentalization of secondary metabolism
Directory of Open Access Journals (Sweden)
H. Corby eKistler
Full Text Available Fungal secondary metabolism is often considered apart from the essential housekeeping functions of the cell. However, there are clear links between fundamental cellular metabolism and the biochemical pathways leading to secondary metabolite synthesis. Besides utilizing key biochemical precursors shared with the most essential processes of the cell (e.g. amino acids, acetyl CoA, NADPH, enzymes for secondary metabolite synthesis are compartmentalized at conserved subcellular sites that position pathway enzymes to use these common biochemical precursors. Co-compartmentalization of secondary metabolism pathway enzymes also may function to channel precursors, promote pathway efficiency and sequester pathway intermediates and products from the rest of the cell. In this review we discuss the compartmentalization of three well-studied fungal secondary metabolite biosynthetic pathways for penicillin G, aflatoxin and deoxynivalenol, and summarize evidence used to infer subcellular localization. We also discuss how these metabolites potentially are trafficked within the cell and may be exported.
5. A Life for a Life
Institute of Scientific and Technical Information of China (English)
The English author, Richard Savage, was once living in London ingreat poverty. In order to earn a little money he had written the story" ofhis life. But not many copies of the book had been sold in the shops, and
6. Fundamental Limits to Cellular Sensing (United States)
ten Wolde, Pieter Rein; Becker, Nils B.; Ouldridge, Thomas E.; Mugler, Andrew
In recent years experiments have demonstrated that living cells can measure low chemical concentrations with high precision, and much progress has been made in understanding what sets the fundamental limit to the precision of chemical sensing. Chemical concentration measurements start with the binding of ligand molecules to receptor proteins, which is an inherently noisy process, especially at low concentrations. The signaling networks that transmit the information on the ligand concentration from the receptors into the cell have to filter this receptor input noise as much as possible. These networks, however, are also intrinsically stochastic in nature, which means that they will also add noise to the transmitted signal. In this review, we will first discuss how the diffusive transport and binding of ligand to the receptor sets the receptor correlation time, which is the timescale over which fluctuations in the state of the receptor, arising from the stochastic receptor-ligand binding, decay. We then describe how downstream signaling pathways integrate these receptor-state fluctuations, and how the number of receptors, the receptor correlation time, and the effective integration time set by the downstream network, together impose a fundamental limit on the precision of sensing. We then discuss how cells can remove the receptor input noise while simultaneously suppressing the intrinsic noise in the signaling network. We describe why this mechanism of time integration requires three classes (groups) of resources—receptors and their integration time, readout molecules, energy—and how each resource class sets a fundamental sensing limit. We also briefly discuss the scheme of maximum-likelihood estimation, the role of receptor cooperativity, and how cellular copy protocols differ from canonical copy protocols typically considered in the computational literature, explaining why cellular sensing systems can never reach the Landauer limit on the optimal trade
7. Defining life: the virus viewpoint. (United States)
Forterre, Patrick
Are viruses alive? Until very recently, answering this question was often negative and viruses were not considered in discussions on the origin and definition of life. This situation is rapidly changing, following several discoveries that have modified our vision of viruses. It has been recognized that viruses have played (and still play) a major innovative role in the evolution of cellular organisms. New definitions of viruses have been proposed and their position in the universal tree of life is actively discussed. Viruses are no more confused with their virions, but can be viewed as complex living entities that transform the infected cell into a novel organism-the virus-producing virions. I suggest here to define life (an historical process) as the mode of existence of ribosome encoding organisms (cells) and capsid encoding organisms (viruses) and their ancestors. I propose to define an organism as an ensemble of integrated organs (molecular or cellular) producing individuals evolving through natural selection. The origin of life on our planet would correspond to the establishment of the first organism corresponding to this definition.
8. Defining Life: The Virus Viewpoint (United States)
Forterre, Patrick
9. Biomaterials innovation bundling technologies and life
CERN Document Server
Styhre, A
10. Evolution of Cellular Inclusions in Bietti’s Crystalline Dystrophy
Directory of Open Access Journals (Sweden)
Emiko Furusato
Full Text Available Bietti’s crystalline dystrophy (BCD consists of small, yellow-white, glistening intraretinal crystals in the posterior pole, tapetoretinal degeneration with atrophy of the retinal pigment epithelium (RPE and “sclerosis” of the choroid; in addition, sparking yellow crystals in the superficial marginal cornea are also found in many patients. BCD is inherited as an autosomal-recessive trait (4q35-tel and usually has its onset in the third decade of life. This review focuses on the ultrastructure of cellular crystals and lipid inclusions of BCD.
11. Functional and cellular adaptations of rodent skeletal muscle to weightlessness (United States)
Caiozzo, Vincent J.; Haddad, Fadia; Baker, Michael J.; Baldwin, Kenneth M.
This paper describes the affects of microgravity upon three key cellular levels (functional, protein, and mRNA) that are linked to one another. It is clear that at each of these levels, microgravity produces rapid and substantial alterations. One of the key challenges facing the life science community is the development of effective countermeasures that prevent the loss of muscle function as described in this paper. The development of optimal countermeasures, however, awaits a clearer understanding of events occurring at the levels of transcription, translation, and degradation.
12. Intrinsic Simulations between Stochastic Cellular Automata
Directory of Open Access Journals (Sweden)
Pablo Arrighi
Full Text Available The paper proposes a simple formalism for dealing with deterministic, non-deterministic and stochastic cellular automata in a unifying and composable manner. Armed with this formalism, we extend the notion of intrinsic simulation between deterministic cellular automata, to the non-deterministic and stochastic settings. We then provide explicit tools to prove or disprove the existence of such a simulation between two stochastic cellular automata, even though the intrinsic simulation relation is shown to be undecidable in dimension two and higher. The key result behind this is the caracterization of equality of stochastic global maps by the existence of a coupling between the random sources. We then prove that there is a universal non-deterministic cellular automaton, but no universal stochastic cellular automaton. Yet we provide stochastic cellular automata achieving optimal partial universality.
13. Life Pottery
Institute of Scientific and Technical Information of China (English)
Zhang Wenzhi creates a rich variety of pottery works by coveringpottery roughcasts of different qualities with a range of coloredglazes,patterns and textures.Her works principally reflect differentsocial and personal themes,are not for practical use but moreendorse her interest in pottery,her feelings on life,and a sense ofmodernity.
Institute of Scientific and Technical Information of China (English)
Zhisong JIANG
Limit language complexity of cellular automata which is first posed by S. Wolfram has become a new branch of cellular automata. In this paper, we obtain two interesting relationships between elementary cellular automata of rules 126, 146(182) and 18, and prove that if the limit language of rule 18 is not regular, nor are the limit languages of rules 126 and 146(182).
15. Autophagy and mitophagy in cellular damage control
Directory of Open Access Journals (Sweden)
Jianhua Zhang
Full Text Available Autophagy and mitophagy are important cellular processes that are responsible for breaking down cellular contents, preserving energy and safeguarding against accumulation of damaged and aggregated biomolecules. This graphic review gives a broad summary of autophagy and discusses examples where autophagy is important in controlling protein degradation. In addition we highlight how autophagy and mitophagy are involved in the cellular responses to reactive species and mitochondrial dysfunction. The key signaling pathways for mitophagy are described in the context of bioenergetic dysfunction.
16. Efficiency of cellular information processing
CERN Document Server
Barato, Andre C; Seifert, Udo
We show that a rate of conditional Shannon entropy reduction, characterizing the learning of an internal process about an external process, is bounded by the thermodynamic entropy production. This approach allows for the definition of an informational efficiency that can be used to study cellular information processing. We analyze three models of increasing complexity inspired by the E. coli sensory network, where the external process is an external ligand concentration jumping between two values. We start with a simple model for which ATP must be consumed so that a protein inside the cell can learn about the external concentration. With a second model for a single receptor we show that the rate at which the receptor learns about the external environment can be nonzero even without any dissipation inside the cell since chemical work done by the external process compensates for this learning rate. The third model is more complete, also containing adaptation. For this model we show inter alia that a bacterium i...
17. The cellular toxicity of aluminium. (United States)
Exley, C; Birchall, J D
Aluminium is a serious environmental toxicant and is inimical to biota. Omnipresent, it is linked with a number of disorders in man including Alzheimer's disease, Parkinson's dementia and osteomalacia. Evidence supporting aluminium as an aetiological agent in such disorders is not conclusive and suffers principally from a lack of consensus with respect to aluminium's toxic mode of action. Obligatory to the elucidation of toxic mechanisms is an understanding of the biological availability of aluminium. This describes the fate of and response to aluminium in any biological system and is thus an important influence of the toxicity of aluminium. A general theme in much aluminium toxicity is an accelerated cell death. Herein mechanisms are described to account for cell death from both acute and chronic aluminium challenges. Aluminium associations with both extracellular surfaces and intracellular ligands are implicated. The cellular response to aluminium is found to be biphasic having both stimulatory and inhibitory components. In either case the disruption of second messenger systems is observed and GTPase cycles are potential target sites. Specific ligands for aluminium at these sites are unknown though are likely to be proteins upon which oxygen-based functional groups are orientated to give exceptionally strong binding with the free aluminium ion.
18. Integration of mobile satellite and cellular systems (United States)
Drucker, Elliott H.; Estabrook, Polly; Pinck, Deborah; Ekroot, Laura
By integrating the ground based infrastructure component of a mobile satellite system with the infrastructure systems of terrestrial 800 MHz cellular service providers, a seamless network of universal coverage can be established. Users equipped for both cellular and satellite service can take advantage of a number of features made possible by such integration, including seamless handoff and universal roaming. To provide maximum benefit at lowest posible cost, the means by which these systems are integrated must be carefully considered. Mobile satellite hub stations must be configured to efficiently interface with cellular Mobile Telephone Switching Offices (MTSO's), and cost effective mobile units that provide both cellular and satellite capability must be developed.
19. Optimized Cellular Core for Rotorcraft Project (United States)
National Aeronautics and Space Administration — Patz Materials and Technologies proposes to develop a unique structural cellular core material to improve mechanical performance, reduce platform weight and lower...
Rozhkova, E A; Ulasov, I V; Kim, D-H; Dimitrijevic, N M; Novosad, V; Bader, S D; Lesniak, M S; Rajh, T
Functional nanoscale materials that possess specific physical or chemical properties can leverage energy transduction in vivo. Once these materials integrate with biomolecules they combine physical properties of inorganic material and the biorecognition capabilities of bio-organic moieties. Such nano-bio hybrids can be interfaced with living cells, the elementary functional units of life. These nano-bio systems are capable of bio-manipulation or actuation via altering intracellular biochemical pathways. Thus, nano-bio conjugates are appealing for a wide range of applications from the life sciences and nanomedicine to catalysis and clean energy production. Here we highlight recent progress in our efforts to develop smart nano-bio hybrid materials, and to study their performance within cellular machinery under application of external stimuli, such as light or magnetic fields.
1. Pulsed feedback defers cellular differentiation.
Directory of Open Access Journals (Sweden)
Joe H Levine
2. Integrated Molecular and Cellular Biophysics
CERN Document Server
Raicu, Valerica
3. One life
Directory of Open Access Journals (Sweden)
Demkova E.E.
Full Text Available It is not easy to care for a special needs child. Especially, it is easy to understand parents’ worries about their grown up children. Living in one’s own family or supported living in the community are much more preferable than the options the state can offer. The author — a mother of a young woman with autism — contemplates about possibilities for independent living for a person with special needs after their parents are gone. She is confident that teaching a child skills for independent living is not less important than giving them school education. The author illustrates her thoughts with real examples of support for adults with disabilities in their independent life or life in a foster family in a city, as well in rural areas.
4. Systematic identification of cellular signals reactivating Kaposi sarcoma-associated herpesvirus.
Directory of Open Access Journals (Sweden)
Fuqu Yu
Full Text Available The herpesvirus life cycle has two distinct phases: latency and lytic replication. The balance between these two phases is critical for viral pathogenesis. It is believed that cellular signals regulate the switch from latency to lytic replication. To systematically evaluate the cellular signals regulating this reactivation process in Kaposi sarcoma-associated herpesvirus, the effects of 26,000 full-length cDNA expression constructs on viral reactivation were individually assessed in primary effusion lymphoma-derived cells that harbor the latent virus. A group of diverse cellular signaling proteins were identified and validated in their effect of inducing viral lytic gene expression from the latent viral genome. The results suggest that multiple cellular signaling pathways can reactivate the virus in a genetically homogeneous cell population. Further analysis revealed that the Raf/MEK/ERK/Ets-1 pathway mediates Ras-induced reactivation. The same pathway also mediates spontaneous reactivation, which sets the first example to our knowledge of a specific cellular pathway being studied in the spontaneous reactivation process. Our study provides a functional genomic approach to systematically identify the cellular signals regulating the herpesvirus life cycle, thus facilitating better understanding of a fundamental issue in virology and identifying novel therapeutic targets.
5. Prediction and functional analysis of native disorder in proteins from the three kingdoms of life. (United States)
Ward, J J; Sodhi, J S; McGuffin, L J; Buxton, B F; Jones, D T
An automatic method for recognizing natively disordered regions from amino acid sequence is described and benchmarked against predictors that were assessed at the latest critical assessment of techniques for protein structure prediction (CASP) experiment. The method attains a Wilcoxon score of 90.0, which represents a statistically significant improvement on the methods evaluated on the same targets at CASP. The classifier, DISOPRED2, was used to estimate the frequency of native disorder in several representative genomes from the three kingdoms of life. Putative, long (>30 residue) disordered segments are found to occur in 2.0% of archaean, 4.2% of eubacterial and 33.0% of eukaryotic proteins. The function of proteins with long predicted regions of disorder was investigated using the gene ontology annotations supplied with the Saccharomyces genome database. The analysis of the yeast proteome suggests that proteins containing disorder are often located in the cell nucleus and are involved in the regulation of transcription and cell signalling. The results also indicate that native disorder is associated with the molecular functions of kinase activity and nucleic acid binding.
6. Pumping life
DEFF Research Database (Denmark)
Sitsel, Oleg; Dach, Ingrid; Hoffmann, Robert Daniel
of membrane proteins: P-type ATPase pumps. This article takes the reader on a tour from Aarhus to Copenhagen, from bacteria to plants and humans, and from ions over protein structures to diseases caused by malfunctioning pump proteins. The magazine Nature once titled work published from PUMPKIN ‘Pumping ions......’. Here we illustrate that the pumping of ions means nothing less than the pumping of life....
7. Waning and aging of cellular immunity to Bordetella pertussis. (United States)
van Twillert, Inonge; Han, Wanda G H; van Els, Cécile A C M
While it is clear that the maintenance of Bordetella pertussis-specific immunity evoked both after vaccination and infection is insufficient, it is unknown at which pace waning occurs and which threshold levels of sustained functional memory B and T cells are required to provide long-term protection. Longevity of human cellular immunity to B. pertussis has been studied less extensively than serology, but is suggested to be key for the observed differences between the duration of protection induced by acellular vaccination and whole cell vaccination or infection. The induction and maintenance of levels of protective memory B and T cells may alter with age, associated with changes of the immune system throughout life and with accumulating exposures to circulating B. pertussis or vaccine doses. This is relevant since pertussis affects all age groups. This review summarizes current knowledge on the waning patterns of human cellular immune responses to B. pertussis as addressed in diverse vaccination and infection settings and in various age groups. Knowledge on the effectiveness and flaws in human B. pertussis-specific cellular immunity ultimately will advance the improvement of pertussis vaccination strategies.
8. Cellular Reprogramming Using Defined Factors and MicroRNAs. (United States)
Eguchi, Takanori; Kuboki, Takuo
Development of human bodies, organs, and tissues contains numerous steps of cellular differentiation including an initial zygote, embryonic stem (ES) cells, three germ layers, and multiple expertized lineages of cells. Induced pluripotent stem (iPS) cells have been recently developed using defined reprogramming factors such as Nanog, Klf5, Oct3/4 (Pou5f1), Sox2, and Myc. This outstanding innovation is largely changing life science and medicine. Methods of direct reprogramming of cells into myocytes, neurons, chondrocytes, and osteoblasts have been further developed using modified combination of factors such as N-myc, L-myc, Sox9, and microRNAs in defined cell/tissue culture conditions. Mesenchymal stem cells (MSCs) and dental pulp stem cells (DPSCs) are also emerging multipotent stem cells with particular microRNA expression signatures. It was shown that miRNA-720 had a role in cellular reprogramming through targeting the pluripotency factor Nanog and induction of DNA methyltransferases (DNMTs). This review reports histories, topics, and idea of cellular reprogramming.
9. Recent development of cellular manufacturing systems
Indian Academy of Sciences (India)
P K Arora; A Haleem; M K Singh
Cellular manufacturing system has been proved a vital approach for batch and job shop production systems. Group technology has been an essential tool for developing a cellular manufacturing system. The paper aims to discuss various cell formation techniques and highlights the significant research work done in past over the years and attempts to points out the gap in research.
10. Cellular encoding for interactive evolutionary robotics
NARCIS (Netherlands)
Gruau, F.C.; Quatramaran, K.
This work reports experiments in interactive evolutionary robotics. The goal is to evolve an Artificial Neural Network (ANN) to control the locomotion of an 8-legged robot. The ANNs are encoded using a cellular developmental process called cellular encoding. In a previous work similar experiments ha
11. LMS filters for cellular CDMA overlay
This paper extends and complements previous research we have performed on the performance of nonadaptive narrowband suppression filters when used in cellular CDMA overlay situations. In this paper, an adaptive LMS filter is applied to cellular CDMA overlay situations in order to reject narrowband interference.
12. From Cnn Dynamics to Cellular Wave Computers (United States)
Roska, Tamas
Embedded in a historical overview, the development of the Cellular Wave Computing paradigm is presented, starting from the standard CNN dynamics. The theoretical aspects, the physical implementation, the innovation process, as well as the biological relevance are discussed in details. Finally, the latest developments, the physical versus virtual cellular machines, as well as some open questions are presented.
13. Life on Earth is an individual. (United States)
Hermida, Margarida
Life is a self-maintaining process based on metabolism. Something is said to be alive when it exhibits organization and is actively involved in its own continued existence through carrying out metabolic processes. A life is a spatio-temporally restricted event, which continues while the life processes are occurring in a particular chunk of matter (or, arguably, when they are temporally suspended, but can be restarted at any moment), even though there is continuous replacement of parts. Life is organized in discrete packages, particular cells and multicellular organisms with differing degrees of individuality. Biological species, too, have been shown to be individuals, and not classes, as these collections of organisms are spatio-temporally localized, restricted, continuous, and somewhat cohesive entities, with a definite beginning and end. Assuming that all life on Earth has a common origin, all living organisms, cells, and tissues descending from this origin exhibit continuity of the life processes at the cellular level, as well as many of the features that define the individual character of species: spatio-temporal localization and restriction, continuity, historicity, and cohesiveness. Therefore, life on Earth is an ontological individual. Independent origins of life will have produced other such individuals. These provisionally called 'life-individuals' constitute a category of organization of life which has seldom been recognized. The discovery of at least one independent life-individual would go a long way toward the project of the universality of biology.
14. Life: An Introduction to Complex Systems Biology
CERN Document Server
Kaneko, Kunihiko
What is life? Has molecular biology given us a satisfactory answer to this question? And if not, why, and how to carry on from there? This book examines life not from the reductionist point of view, but rather asks the question: what are the universal properties of living systems and how can one construct from there a phenomenological theory of life that leads naturally to complex processes such as reproductive cellular systems, evolution and differentiation? The presentation has been deliberately kept fairly non-technical so as to address a broad spectrum of students and researchers from the natural sciences and informatics.
15. Measurements of Electromagnetic Fields Emitted from Cellular Base Stations in
Directory of Open Access Journals (Sweden)
K. J. Ali
Full Text Available With increasing the usage of mobile communication devices and internet network information, the entry of private telecommunications companies in Iraq has been started since 2003. These companies began to build up cellular towers to accomplish the telecommunication works but they ignore the safety conditions imposed for the health and environment that are considered in random way. These negative health effects which may cause a health risk for life beings and environment pollution. The aim of this work is to determine the safe and unsafe ranges and discuss damage caused by radiation emitted from Asia cell base stations in Shirqat city and discuses the best ways in which can be minimize its exposure level to avoid its negative health effects. Practical measurements of power density around base stations has been accomplished by using a radiation survey meter type (Radio frequency EMF Strength Meter 480846 in two ways. The first way of measurements has been accomplished at a height of 2 meters above ground for different distances from (0-300 meters .The second way is at a distance of 150 meters for different levels from (2-15 meters above ground level. The maximum measured power density is about (3 mW/m2. Results indicate that the levels of power density are far below the RF radiation exposure of USSR safety standards levels. And that means these cellular base station don't cause negative the health effect for life being if the exposure is within the acceptable international standard levels.
16. The Universe as a Cellular System
CERN Document Server
Aragón-Calvo, Miguel A
Cellular systems are observed everywhere in nature, from crystal domains in metals, soap froth and cucumber cells to the network of cosmological voids. Surprisingly, despite their disparate scale and origin all cellular systems follow certain scaling laws relating their geometry, topology and dynamics. Using a cosmological N-body simulation we found that the Cosmic Web, the largest known cellular system, follows the same scaling relations seen elsewhere in nature. Our results extend the validity of scaling relations in cellular systems by over 30 orders of magnitude in scale with respect to previous studies. The dynamics of cellular systems can be used to interpret local observations such as the local velocity anomaly as the result of a collapsing void in our cosmic backyard. Moreover, scaling relations depend on the curvature of space, providing an independent measure of geometry.
17. The mammary cellular hierarchy and breast cancer. (United States)
Oakes, Samantha R; Gallego-Ortega, David; Ormandy, Christopher J
18. A mathematical model representing cellular immune development and response to Salmonella of chicken intestinal tissue
NARCIS (Netherlands)
Schokker, D.; Bannink, A.; Smits, M.A.; Rebel, J.M.J.
The aim of this study was to create a dynamic mathematical model of the development of the cellular branch of the intestinal immune system of poultry during the first 42 days of life and of its response towards an oral infection with Salmonella enterica serovar Enteritidis. The system elements were
19. The ING tumor suppressors in cellular senescence and chromatin. (United States)
Ludwig, Susann; Klitzsch, Alexandra; Baniahmad, Aria
The Inhibitor of Growth (ING) proteins represent a type II tumor suppressor family comprising five conserved genes, ING1 to ING5. While ING1, ING2 and ING3 proteins are stable components of the mSIN3a-HDAC complexes, the association of ING1, ING4 and ING5 with HAT protein complexes was also reported. Among these the ING1 and ING2 have been analyzed more deeply. Similar to other tumor suppressor factors the ING proteins are also involved in many cellular pathways linked to cancer and cell proliferation such as cell cycle regulation, cellular senescence, DNA repair, apoptosis, inhibition of angiogenesis and modulation of chromatin.A common structural feature of ING factors is the conserved plant homeodomain (PHD), which can bind directly to the histone mark trimethylated lysine of histone H3 (H3K4me3). PHD mutants lose the ability to undergo cellular senescence linking chromatin mark recognition with cellular senescence. ING1 and ING2 are localized in the cell nucleus and associated with chromatin modifying enzymes, linking tumor suppression directly to chromatin regulation. In line with this, the expression of ING1 in tumors is aberrant or identified point mutations are mostly localized in the PHD finger and affect histone binding. Interestingly, ING1 protein levels increase in replicative senescent cells, latter representing an efficient pathway to inhibit cancer proliferation. In association with this, suppression of p33ING1 expression prolongs replicative life span and is also sufficient to bypass oncogene-induced senescence. Recent analyses of ING1- and ING2-deficient mice confirm a tumor suppressive role of ING1 and ING2 and also indicate an essential role of ING2 in meiosis.Here we summarize the activity of ING1 and ING2 as tumor suppressors, chromatin factors and in development.
20. Floating Life
Institute of Scientific and Technical Information of China (English)
@@ One in six people in China have left their hometown in search of a better life and the number continues to grow,creating a challenge for host cities,according to a government report.The floating population,or people who live and work outside their permanent home,reached 211 million last year and the number could reach 350 million by 2050 if govemment policies remain unchanged,said the Report on the Development of China's Floating Population issued on June 26 by the National Population and Family Planning Commission (NPFPC).
1. Markers of cellular senescence. Telomere shortening as a marker of cellular senescence. (United States)
Bernadotte, Alexandra; Mikhelson, Victor M; Spivak, Irina M
The cellular senescence definition comes to the fact of cells irreversible proliferation disability. Besides the cell cycle arrest, senescent cells go through some morphological, biochemical, and functional changes which are the signs of cellular senescence. The senescent cells (including replicative senescence and stress-induced premature senescence) of all the tissues look alike. They are metabolically active and possess the set of characteristics in vitro and in vivo, which are known as biomarkers of aging and cellular senescence. Among biomarkers of cellular senescence telomere shortening is a rather elegant frequently used biomarker. Validity of telomere shortening as a marker for cellular senescence is based on theoretical and experimental data.
2. Life Span Extension and Neuronal Cell Protection by Drosophila Nicotinamidase*S⃞
Balan, Vitaly; Gregory S Miller; Kaplun, Ludmila; Balan, Karina; Chong, Zhao-Zhong; Li, Faqi; Kaplun, Alexander; Mark F A VanBerkum; Arking, Robert; Freeman, D. Carl; Maiese, Kenneth; Tzivion, Guri
The life span of model organisms can be modulated by environmental conditions that influence cellular metabolism, oxidation, or DNA integrity. The yeast nicotinamidase gene pnc1 was identified as a key transcriptional target and mediator of calorie restriction and stress-induced life span extension. PNC1 is thought to exert its effect on yeast life span by modulating cellular nicotinamide and NAD levels, resulting in increased activity of Sir2 family class III histone ...
3. Cellular Cell Bifurcation of Cylindrical Detonations
Institute of Scientific and Technical Information of China (English)
HAN Gui-Lai; JIANG Zong-Lin; WANG Chun; ZHANG Fan
Cellular cell pattern evolution of cylindrically-diverging detonations is numerically simulated successfully by solving two-dimensional Euler equations implemented with an improved two-step chemical kinetic model. From the simulation, three cell bifurcation modes are observed during the evolution and referred to as concave front focusing, kinked and wrinkled wave front instability, and self-merging of cellular cells. Numerical research demonstrates that the wave front expansion resulted from detonation front diverging plays a major role in the cellular cell bifurcation, which can disturb the nonlinearly self-sustained mechanism of detonations and finally lead to cell bifurcations.
4. Optimal Band Allocation for Cognitive Cellular Networks
CERN Document Server
Liu, Tingting
FCC new regulation for cognitive use of the TV white space spectrum provides a new means for improving traditional cellular network performance. But it also introduces a number of technical challenges. This letter studies one of the challenges, that is, given the significant differences in the propagation property and the transmit power limitations between the cellular band and the TV white space, how to jointly utilize both bands such that the benefit from the TV white space for improving cellular network performance is maximized. Both analytical and simulation results are provided.
5. Cryptographic primitives based on cellular transformations
Directory of Open Access Journals (Sweden)
B.V. Izotov
Full Text Available Design of cryptographic primitives based on the concept of cellular automata (CA is likely to be a promising trend in cryptography. In this paper, the improved method performing data transformations by using invertible cyclic CAs (CCA is considered. Besides, the cellular operations (CO as a novel CAs application in the block ciphers are introduced. Proposed CCAs and COs, integrated under the name of cellular transformations (CT, suit well to be used in cryptographic algorithms oriented to fast software and cheap hardware implementation.
6. Imaging in cellular and tissue engineering
CERN Document Server
Yu, Hanry
7. On-Chip Detection of Cellular Activity (United States)
Almog, R.; Daniel, R.; Vernick, S.; Ron, A.; Ben-Yoav, H.; Shacham-Diamand, Y.
The use of on-chip cellular activity monitoring for biological/chemical sensing is promising for environmental, medical and pharmaceutical applications. The miniaturization revolution in microelectronics is harnessed to provide on-chip detection of cellular activity, opening new horizons for miniature, fast, low cost and portable screening and monitoring devices. In this chapter we survey different on-chip cellular activity detection technologies based on electrochemical, bio-impedance and optical detection. Both prokaryotic and eukaryotic cell-on-chip technologies are mentioned and reviewed.
8. Cellular Factors Required for Lassa Virus Budding
Urata, Shuzo; Noda, Takeshi; Kawaoka, Yoshihiro; Yokosawa, Hideyoshi; Yasuda, Jiro
It is known that Lassa virus Z protein is sufficient for the release of virus-like particles (VLPs) and that it has two L domains, PTAP and PPPY, in its C terminus. However, little is known about the cellular factor for Lassa virus budding. We examined which cellular factors are used in Lassa virus Z budding. We demonstrated that Lassa Z protein efficiently produces VLPs and uses cellular factors, Vps4A, Vps4B, and Tsg101, in budding, suggesting that Lassa virus budding uses the multivesicula...
9. Planetary Systems and the Origins of Life (United States)
Pudritz, Ralph; Higgs, Paul; Stone, Jonathon
Preface; Part I. Planetary Systems and the Origins of Life: 1. Observations of extrasolar planetary systems Shay Zucker; 2. The atmospheres of extrasolar planets L. Jeremy Richardson and Sara Seager; 3. Terrestrial planet formation Edward Thommes; 4. Protoplanetary disks, amino acids and the genetic code Paul Higgs and Ralph Pudritz; 5. Emergent phenomena in biology: the origin of cellular life David Deamer; Part II. Life on Earth: 6. Extremophiles: defining the envelope for the search for life in the Universe Lynn Rothschild; 7. Hyperthermophilic life on Earth - and on Mars? Karl Stetter; 8. Phylogenomics: how far back in the past can we go? Henner Brinkmann, Denis Baurain and Hervé Philippe; 9. Horizontal gene transfer, gene histories and the root of the tree of life Olga Zhaxybayeva and J. Peter Gogarten; 10. Evolutionary innovation versus ecological incumbency Adolf Seilacher; 11. Gradual origins for the Metazoans Alexandra Pontefract and Jonathan Stone; Part III. Life in the Solar System?: 12. The search for life on Mars Chris McKay; 13. Life in the dark dune spots of Mars: a testable hypothesis Eörs Szathmary, Tibor Ganti, Tamas Pocs, Andras Horvath, Akos Kereszturi, Szaniszlo Berzci and Andras Sik; 14. Titan: a new astrobiological vision from the Cassini-Huygens data François Raulin; 15. Europa, the Ocean Moon: tides, permeable ice, and life Richard Greenberg; Index.
10. The similarity of life across the universe. (United States)
Cockell, Charles S
Is the hypothesis correct that if life exists elsewhere in the universe, it would have forms and structures unlike anything we could imagine? From the subatomic level in cellular energy acquisition to the assembly and even behavior of organisms at the scale of populations, life on Earth exhibits characteristics that suggest it is a universal norm for life at all levels of hierarchy. These patterns emerge from physical and biochemical limitations. Their potentially universal nature is supported by recent data on the astrophysical abundance and availability of carbon compounds and water. Within these constraints, biochemical and biological variation is certainly possible, but it is limited. If life exists elsewhere, life on Earth, rather than being a contingent product of one specific experiment in biological evolution, is likely to reflect common patterns for the assembly of living matter.
11. A Matrix Construction of Cellular Algebras
Institute of Scientific and Technical Information of China (English)
Dajing Xiang
In this paper, we give a concrete method to construct cellular algebras from matrix algebras by specifying certain fixed matrices for the data of inflations. In particular,orthogonal matrices can be chosen for such data.
12. Cellular Defect May Be Linked to Parkinson's (United States)
... 160862.html Cellular Defect May Be Linked to Parkinson's: Study Abnormality might apply to all forms of ... that may be common to all forms of Parkinson's disease. The defect plays a major role in ...
13. Integration of Mobil Satellite and Cellular Systems (United States)
Drucker, E. H.; Estabrook, P.; Pinck, D.; Ekroot, L.
14. Cellular Automaton Modeling of Pattern Formation
NARCIS (Netherlands)
Boerlijst, M.C.
Book review Andreas Deutsch and Sabine Dormann, Cellular Automaton Modeling of Biological Pattern Formation, Characterization, Applications, and Analysis, Birkhäuser (2005) ISBN 0-8176-4281-1 331pp..
15. Optimized Cellular Core for Rotorcraft Project (United States)
National Aeronautics and Space Administration — Patz Materials and Technologies has developed, produced and tested, as part of the Phase-I SBIR, a new form of composite cellular core material, named Interply Core,...
16. Densities and entropies in cellular automata
CERN Document Server
Guillon, Pierre
Following work by Hochman and Meyerovitch on multidimensional SFT, we give computability-theoretic characterizations of the real numbers that can appear as the topological entropies of one-dimensional and two-dimensional cellular automata.
17. On the Behavior Characteristics of Cellular Automata
Institute of Scientific and Technical Information of China (English)
CHEN Jin-cai; ZHANG Jiang-ling; FENG Dan
In this paper, the inherent relationships between the running regulations and behavior characteristics of cellular automata are presented; an imprecise taxonomy of such systems is put forward; the three extreme cases of stable systems are discussed; and the illogicalness of evolutional strategies of cellular automata is analyzed. The result is suitable for the emulation and prediction of behavior of discrete dynamics systems; especially it can be taken as an important analysis means of dynamic performance of complex networks.
18. Sponging of Cellular Proteins by Viral RNAs
Charley, Phillida A.; Wilusz, Jeffrey
Viral RNAs accumulate to high levels during infection and interact with a variety of cellular factors including miRNAs and RNA-binding proteins. Although many of these interactions exist to directly modulate replication, translation and decay of viral transcripts, evidence is emerging that abundant viral RNAs may in certain cases serve as a sponge to sequester host non coding RNAs and proteins. By effectively reducing the ability of cellular RNA binding proteins to regulate host cell gene exp...
19. Polymersomes containing quantum dots for cellular imaging
Directory of Open Access Journals (Sweden)
Camblin M
Full Text Available Marine Camblin,1 Pascal Detampel,1 Helene Kettiger,1 Dalin Wu,2 Vimalkumar Balasubramanian,1,* Jörg Huwyler1,*1Division of Pharmaceutical Technology, 2Department of Chemistry, University of Basel, Basel, Switzerland*These authors contributed equally to this workAbstract: Quantum dots (QDs are highly fluorescent and stable probes for cellular and molecular imaging. However, poor intracellular delivery, stability, and toxicity of QDs in biological compartments hamper their use in cellular imaging. To overcome these limitations, we developed a simple and effective method to load QDs into polymersomes (Ps made of poly(dimethylsiloxane-poly(2-methyloxazoline (PDMS-PMOXA diblock copolymers without compromising the characteristics of the QDs. These Ps showed no cellular toxicity and QDs were successfully incorporated into the aqueous compartment of the Ps as confirmed by transmission electron microscopy, fluorescence spectroscopy, and fluorescence correlation spectroscopy. Ps containing QDs showed colloidal stability over a period of 6 weeks if stored in phosphate-buffered saline (PBS at physiological pH (7.4. Efficient intracellular delivery of Ps containing QDs was achieved in human liver carcinoma cells (HepG2 and was visualized by confocal laser scanning microscopy (CLSM. Ps containing QDs showed a time- and concentration-dependent uptake in HepG2 cells and exhibited better intracellular stability than liposomes. Our results suggest that Ps containing QDs can be used as nanoprobes for cellular imaging.Keywords: quantum dots, polymersomes, cellular imaging, cellular uptake
20. Recognising life
DEFF Research Database (Denmark)
Nissen, Morten
The author attempts a micro-bio-politics of drugs, starting from an excerpt of an interview with a couple of young drug users in a Copenhagen social youth work facility that pushes harm reduction in 1996. The article is guided by Derrida’s idea of ‘drugs as the religion of atheist poets’ – that t......The author attempts a micro-bio-politics of drugs, starting from an excerpt of an interview with a couple of young drug users in a Copenhagen social youth work facility that pushes harm reduction in 1996. The article is guided by Derrida’s idea of ‘drugs as the religion of atheist poets......–Marxist traditions. The analysis unfolds as an ideology critique that reconstructs, and seeks ways to overcome, particular forms of recognition that are identifiable in the data and in the field of drug practices, and how these form part of the constitution of singular collectives and participants – in these life...... practices, but also in the research practice that engaged with them through the interview....
1. Optimization of Inter Cellular Movement of Parts in Cellular Manufacturing System Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Siva Prasad Darla
Full Text Available In the modern manufacturing environment, Cellular Manufacturing Systems (CMS have gained greater importance in job shop or batch-type production to gain economic advantage similar to those of mass production. Successful implementation of CMS highly depends on the determination of part families; machine cells and minimizing inter cellular movement. This study considers machine component grouping problems namely inter-cellular movement and cell load variation by developing a mathematical model and optimizing the solution using Genetic Algorithm to arrive at a cell formation to minimize the inter-cellular movement and cell load variation. The results are presented with a numerical example.
2. Photonics for life. (United States)
Cubeddu, Rinaldo; Bassi, Andrea; Comelli, Daniela; Cova, Sergio; Farina, Andrea; Ghioni, Massimo; Rech, Ivan; Pifferi, Antonio; Spinelli, Lorenzo; Taroni, Paola; Torricelli, Alessandro; Tosi, Alberto; Valentini, Gianluca; Zappa, Franco
Light is strictly connected with life, and its presence is fundamental for any living environment. Thus, many biological mechanisms are related to light interaction or can be evaluated through processes involving energy exchange with photons. Optics has always been a precious tool to evaluate molecular and cellular mechanisms, but the discovery of lasers opened new pathways of interactions of light with biological matter, pushing an impressive development for both therapeutic and diagnostic applications in biomedicine. The use of light in different fields has become so widespread that the word photonics has been utilized to identify all the applications related to processes where the light is involved. The photonics area covers a wide range of wavelengths spanning from soft X-rays to mid-infrared and includes all devices related to photons as light sources, optical fibers and light guides, detectors, and all the related electronic equipment. The recent use of photons in the field of telecommunications has pushed the technology toward low-cost, compact, and efficient devices, making them available for many other applications, including those related to biology and medicine where these requirements are of particular relevance. Moreover, basic sciences such as physics, chemistry, mathematics, and electronics have recognized the interdisciplinary need of biomedical science and are translating the most advanced researches into these fields. The Politecnico school has pioneered many of them,and this article reviews the state of the art of biomedical research at the Politecnico in the field internationally known as biophotonics.
3. Reproduction and love: strategies of the organism's cellular defense system? (United States)
De Loof, A; Huybrechts, R; Kotanen, S
A novel view is presented which states that primordial germ cells and their descendants can be regarded as 'cancerous cells' which emit signals that activate a whole array of cellular defensive mechanisms by the somatoplasm. These cells have become unrestrained in response to the lack of typical cell adhesion properties of epithelial cells. From this point of view: (1) the encapsulation of oocytes by follicle cells, vitelline membrane and egg shell; (2) the suppression of gonadal development in larval life; (3) the production of sex steroid hormones and of vitellogenin; and (4) the expulsion of the gametes from the body fit into a general framework for a defense strategy of the somatoplasm against germ line cells. Accordingly, the origin of sexual reproduction appears to be a story of failure and intercellular hostility rather than a 'romantic' and altruistic event. Yet, it has resulted in evolutionary success for the system in which it has evolved; probably through realizing feelings of 'pleasure' associated with reproduction.
4. Crack Propagation in Honeycomb Cellular Materials: A Computational Approach
Directory of Open Access Journals (Sweden)
Marco Paggi
Full Text Available Computational models based on the finite element method and linear or nonlinear fracture mechanics are herein proposed to study the mechanical response of functionally designed cellular components. It is demonstrated that, via a suitable tailoring of the properties of interfaces present in the meso- and micro-structures, the tensile strength can be substantially increased as compared to that of a standard polycrystalline material. Moreover, numerical examples regarding the structural response of these components when subjected to loading conditions typical of cutting operations are provided. As a general trend, the occurrence of tortuous crack paths is highly favorable: stable crack propagation can be achieved in case of critical crack growth, whereas an increased fatigue life can be obtained for a sub-critical crack propagation.
5. Cellular Concrete Bricks with Recycled Expanded Polystyrene Aggregate
Directory of Open Access Journals (Sweden)
Juan Bosco Hernández-Zaragoza
Full Text Available Cellular concrete bricks were obtained by using a lightweight mortar with recycled expanded polystyrene aggregate instead of sandy materials. After determining the block properties (absorption, compressive strength, and tensile stresses, it was found that this brick meets the requirements of the masonry standards used in Mexico. The obtained material is lighter than the commercial ones, which facilitates their rapid elaboration, quality control, and transportation. It is less permeable, which helps prevent moisture formation retaining its strength due to the greater adherence shown with dry polystyrene. It was more flexible, which makes it less vulnerable to cracking walls due to soil displacements. Furthermore, it is economical, because it uses recyclable material and has properties that prevent deterioration increasing its useful life. We recommend the use of the fully dry EP under a dry environment to obtain the best properties of brick.
6. Cellular and Molecular Biological Approaches to Interpreting Ancient Biomarkers (United States)
Newman, Dianne K.; Neubauer, Cajetan; Ricci, Jessica N.; Wu, Chia-Hung; Pearson, Ann
Our ability to read the molecular fossil record has advanced significantly in the past decade. Improvements in biomarker sampling and quantification methods, expansion of molecular sequence databases, and the application of genetic and cellular biological tools to problems in biomarker research have enabled much of this progress. By way of example, we review how attempts to understand the biological function of 2-methylhopanoids in modern bacteria have changed our interpretation of what their molecular fossils tell us about the early history of life. They were once thought to be biomarkers of cyanobacteria and hence the evolution of oxygenic photosynthesis, but we now believe that 2-methylhopanoid biosynthetic capacity originated in the Alphaproteobacteria, that 2-methylhopanoids are regulated in response to stress, and that hopanoid 2-methylation enhances membrane rigidity. We present a new interpretation of 2-methylhopanes that bridges the gap between studies of the functions of 2-methylhopanoids and their patterns of occurrence in the rock record.
7. Mosquito population dynamics from cellular automata-based simulation (United States)
Syafarina, Inna; Sadikin, Rifki; Nuraini, Nuning
In this paper we present an innovative model for simulating mosquito-vector population dynamics. The simulation consist of two stages: demography and dispersal dynamics. For demography simulation, we follow the existing model for modeling a mosquito life cycles. Moreover, we use cellular automata-based model for simulating dispersal of the vector. In simulation, each individual vector is able to move to other grid based on a random walk. Our model is also capable to represent immunity factor for each grid. We simulate the model to evaluate its correctness. Based on the simulations, we can conclude that our model is correct. However, our model need to be improved to find a realistic parameters to match real data.
8. Shape Memory Alloy-Based Periodic Cellular Structures Project (United States)
National Aeronautics and Space Administration — This SBIR Phase I effort will develop and demonstrate an innovative shape memory alloy (SMA) periodic cellular structural technology. Periodic cellular structures...
9. Characterizing heterogeneous cellular responses to perturbations. (United States)
Slack, Michael D; Martinez, Elisabeth D; Wu, Lani F; Altschuler, Steven J
Cellular populations have been widely observed to respond heterogeneously to perturbation. However, interpreting the observed heterogeneity is an extremely challenging problem because of the complexity of possible cellular phenotypes, the large dimension of potential perturbations, and the lack of methods for separating meaningful biological information from noise. Here, we develop an image-based approach to characterize cellular phenotypes based on patterns of signaling marker colocalization. Heterogeneous cellular populations are characterized as mixtures of phenotypically distinct subpopulations, and responses to perturbations are summarized succinctly as probabilistic redistributions of these mixtures. We apply our method to characterize the heterogeneous responses of cancer cells to a panel of drugs. We find that cells treated with drugs of (dis-)similar mechanism exhibit (dis-)similar patterns of heterogeneity. Despite the observed phenotypic diversity of cells observed within our data, low-complexity models of heterogeneity were sufficient to distinguish most classes of drug mechanism. Our approach offers a computational framework for assessing the complexity of cellular heterogeneity, investigating the degree to which perturbations induce redistributions of a limited, but nontrivial, repertoire of underlying states and revealing functional significance contained within distinct patterns of heterogeneous responses.
10. Complexity, dynamic cellular network, and tumorigenesis. (United States)
Waliszewski, P
A holistic approach to tumorigenesis is proposed. The main element of the model is the existence of dynamic cellular network. This network comprises a molecular and an energetistic structure of a cell connected through the multidirectional flow of information. The interactions within dynamic cellular network are complex, stochastic, nonlinear, and also involve quantum effects. From this non-reductionist perspective, neither tumorigenesis can be limited to the genetic aspect, nor the initial event must be of molecular nature, nor mutations and epigenetic factors are mutually exclusive, nor a link between cause and effect can be established. Due to complexity, an unstable stationary state of dynamic cellular network rather than a group of unrelated genes determines the phenotype of normal and transformed cells. This implies relativity of tumor suppressor genes and oncogenes. A bifurcation point is defined as an unstable state of dynamic cellular network leading to the other phenotype-stationary state. In particular, the bifurcation point may be determined by a change of expression of a single gene. Then, the gene is called bifurcation point gene. The unstable stationary state facilitates the chaotic dynamics. This may result in a fractal dimension of both normal and tumor tissues. The co-existence of chaotic dynamics and complexity is the essence of cellular processes and shapes differentiation, morphogenesis, and tumorigenesis. In consequence, tumorigenesis is a complex, unpredictable process driven by the interplay between self-organisation and selection.
11. Redox regulation of SIRT1 in inflammation and cellular senescence. (United States)
Hwang, Jae-woong; Yao, Hongwei; Caito, Samuel; Sundar, Isaac K; Rahman, Irfan
Sirtuin 1 (SIRT1) regulates inflammation, aging (life span and health span), calorie restriction/energetics, mitochondrial biogenesis, stress resistance, cellular senescence, endothelial functions, apoptosis/autophagy, and circadian rhythms through deacetylation of transcription factors and histones. SIRT1 level and activity are decreased in chronic inflammatory conditions and aging, in which oxidative stress occurs. SIRT1 is regulated by a NAD(+)-dependent DNA repair enzyme, poly(ADP-ribose) polymerase-1 (PARP1), and subsequent NAD(+) depletion by oxidative stress may have consequent effects on inflammatory and stress responses as well as cellular senescence. SIRT1 has been shown to undergo covalent oxidative modifications by cigarette smoke-derived oxidants/aldehydes, leading to posttranslational modifications, inactivation, and protein degradation. Furthermore, oxidant/carbonyl stress-mediated reduction of SIRT1 leads to the loss of its control on acetylation of target proteins including p53, RelA/p65, and FOXO3, thereby enhancing the inflammatory, prosenescent, and apoptotic responses, as well as endothelial dysfunction. In this review, the mechanisms of cigarette smoke/oxidant-mediated redox posttranslational modifications of SIRT1 and its roles in PARP1 and NF-κB activation, and FOXO3 and eNOS regulation, as well as chromatin remodeling/histone modifications during inflammaging, are discussed. Furthermore, we have also discussed various novel ways to activate SIRT1 either directly or indirectly, which may have therapeutic potential in attenuating inflammation and premature senescence involved in chronic lung diseases.
12. Halophilic life on Mars ? (United States)
Stan-Lotter, Helga; Fendrihan, Sergiu; Dornmayr-Pfaffenhuemer, Marion; Holzinger, Anita; Polacsek, Tatjana K.; Legat, Andrea; Grösbacher, Michael; Weigl, Andreas
Background: The search for extraterrestrial life has been declared as a goal for the 21th century by several space agencies. Potential candidates are microorganisms on or in the surface of moons and planets, such as Mars. Extremely halophilic archaea (haloarchaea) are of astrobiological interest since viable strains have been isolated from million years old salt deposits (1) and halite has been found in Martian meteorites and in surface pools. Therefore, haloarchaeal responses to simulated and real space conditions were explored. Immuno assays for a potential Life Marker Chip experiment were developed with antisera against the universal enzyme ATP synthase. Methods: The focus of these studies was on the application of fluorescent probes since they provide strong signals, and detection devices are suitable for miniaturization. Viability of haloarchaeal strains (Halococcus dombrowskii and Halobacterium salinarum NRC-1) was probed with the LIVE/DEAD BacLight™ kit and the BacLight™ Bacterial Membrane Potential kit. Cyclobutane pyrimidine dimers (CPD) in the DNA, following exposure to simulated and real space conditions (UV irradiation from 200 - 400 nm; 18 months exposure on the International Space Station [ISS] within the ADAPT experiment by Dr. P. Rettberg), were detected with fluorescent Alexa-Fluor-488-coupled antibodies. Immuno assays with antisera against the A-ATPase subunits from Halorubrum saccharovorum were carried out with the highly sensitive Immun-Star ™ WesternC ™ chemiluminescent kit (Bio-Rad). Results: Using the LIVE/DEAD BacLight™ kit, the D37 (dose of 37% survival) for Hcc. dombrowskii and Hbt. salinarum NRC-1, following exposure to UV (200-400 nm) was about 400 kJ/m2, when cells were embedded in halite and about 1 kJ/m2, when cells were in liquid cultures. Fluorescent staining indicated a slightly higher cellular activity than that which was derived from the determination of colony forming units. Assessment of viability with the Bac
13. DNA Methylation, Behavior and Early Life Adversity
Institute of Scientific and Technical Information of China (English)
Moshe Szyf
The impact of early physical and social environments on life-long phenotypes is well known.Moreover,we have documented evidence for gene-enviromnent interactions where identical gene variants are associated with different phenotypes that are dependent on early life adversity.What are the mechanisms that embed these early life experiences in the genome? DNA methylation is an enzymaticallycatalyzed modification of DNA that serves as a mechanism by which similar sequences acquire cell type identity during cellular differentiation and embryogenesis in the same individual.The hypothesis that will be discussed here proposes that the same mechanism confers environmental-exposure specific identity upon DNA providing a mechanism for embedding environmental experiences in the genome,thus affecting long-term phenotypes.Particularly important is the environment early in life including both the prenatal and postnatal social environments.
14. Online isolation of defects in cellular nanocomputers
Institute of Scientific and Technical Information of China (English)
Teijiro Isokawa; Shin'ya Kowada; Ferdinand Peper; Naotake Kamiura; Nobuyuki Matsui
Unreliability will be a major issue for computers built from components at nanometer scales.Thus,it's to be expected that such computers will need a high degree of defect-tolerance to overcome components' defects which have arisen during the process of manufacturing.This paper presents a novel approach to defect-tolerance that is especially geared towards nanocomputers based on asynchronous cellular automata.According to this approach,defective cells are detected and isolated by small configurations that move around randomly in cellular space.These configurations,called random flies,will attach to configurations that are static,which is typical for configurations that contain defective cells.On the other hand,dynamic configurations,like those that conduct computations,will not be isolated from the rest of the cellular space by the random flies,and will be able to continue their operations unaffectedly.
15. Cellular Signaling in Health and Disease
CERN Document Server
Beckerman, Martin
In today’s world, three great classes of non-infectious diseases – the metabolic syndromes (such as type 2 diabetes and atherosclerosis), the cancers, and the neurodegenerative disorders – have risen to the fore. These diseases, all associated with increasing age of an individual, have proven to be remarkably complex and difficult to treat. This is because, in large measure, when the cellular signaling pathways responsible for maintaining homeostasis and health of the body become dysregulated, they generate equally stable disease states. As a result the body may respond positively to a drug, but only for a while and then revert back to the disease state. Cellular Signaling in Health and Disease summarizes our current understanding of these regulatory networks in the healthy and diseased states, showing which molecular components might be prime targets for drug interventions. This is accomplished by presenting models that explain in mechanistic, molecular detail how a particular part of the cellular sign...
16. Software-Defined Cellular Mobile Network Solutions
Institute of Scientific and Technical Information of China (English)
Jiandong Li; Peng Liu; Hongyan Li
The emergency relating to software-defined networking (SDN), especially in terms of the prototype associated with OpenFlow, pro-vides new possibilities for innovating on network design. Researchers have started to extend SDN to cellular networks. Such new programmable architecture is beneficial to the evolution of mobile networks and allows operators to provide better services. The typical cellular network comprises radio access network (RAN) and core network (CN); hence, the technique roadmap diverges in two ways. In this paper, we investigate SoftRAN, the latest SDN solution for RAN, and SoftCell and MobileFlow, the latest solu-tions for CN. We also define a series of control functions for CROWD. Unlike in the other literature, we emphasize only software-defined cellular network solutions and specifications in order to provide possible research directions.
17. Infrared image enhancement using Cellular Automata (United States)
Qi, Wei; Han, Jing; Zhang, Yi; Bai, Lian-fa
Image enhancement is a crucial technique for infrared images. The clear image details are important for improving the quality of infrared images in computer vision. In this paper, we propose a new enhancement method based on two priors via Cellular Automata. First, we directly learn the gradient distribution prior from the images via Cellular Automata. Second, considering the importance of image details, we propose a new gradient distribution error to encode the structure information via Cellular Automata. Finally, an iterative method is applied to remap the original image based on two priors, further improving the quality of enhanced image. Our method is simple in implementation, easy to understand, extensible to accommodate other vision tasks, and produces more accurate results. Experiments show that the proposed method performs better than other methods using qualitative and quantitative measures.
18. Asymptotic Behavior of Excitable Cellular Automata
CERN Document Server
Durrett, R; Durrett, Richard; Griffeath, David
Abstract: We study two families of excitable cellular automata known as the Greenberg-Hastings Model (GHM) and the Cyclic Cellular Automaton (CCA). Each family consists of local deterministic oscillating lattice dynamics, with parallel discrete-time updating, parametrized by the range of interaction, the "shape" of its neighbor set, threshold value for contact updating, and number of possible states per site. GHM and CCA are mathematically tractable prototypes for the spatially distributed periodic wave activity of so-called excitable media observed in diverse disciplines of experimental science. Earlier work by Fisch, Gravner, and Griffeath studied the ergodic behavior of these excitable cellular automata on Z^2, and identified two distinct (but closely-related) elaborate phase portraits as the parameters vary. In particular, they noted the emergence of asymptotic phase diagrams (and Euclidean dynamics) in a well-defined threshold-range scaling limit. In this study we present several rigorous results and som...
19. Spin Echo Studies on Cellular Water
CERN Document Server
Chang, D C; Nichols, B L; Rorschach, H E
Previous studies indicated that the physical state of cellular water could be significantly different from pure liquid water. To experimentally investigate this possibility, we conducted a series of spin-echo NMR measurements on water protons in rat skeletal muscle. Our result indicated that the spin-lattice relaxation time and the spin-spin relaxation time of cellular water protons are both significantly shorter than that of pure water (by 4.3-fold and 34-fold, respectively). Furthermore, the spin diffusion coefficient of water proton is almost 1/2 of that of pure water. These data suggest that cellular water is in a more ordered state in comparison to pure water.
20. Cellular biosensing: chemical and genetic approaches. (United States)
Haruyama, Tetsuya
Biosensors have been developed to determine the concentration of specific compounds in situ. They are already widely employed as a practical technology in the clinical and healthcare fields. Recently, another concept of biosensing has been receiving attention: biosensing for the evaluation of molecular potency. The development of this novel concept has been supported by the development of related technologies, as such as molecular design, molecular biology (genetic engineering) and cellular/tissular engineering. This review is addresses this new concept of biosensing and its application to the evaluation of the potency of chemicals in biological systems, in the field of cellular/tissular engineering. Cellular biosensing may provide information on both pharmaceutical and chemical safety, and on drug efficacy in vitro as a screening tool.
1. Crack Propagation in Bamboo's Hierarchical Cellular Structure (United States)
Habibi, Meisam K.; Lu, Yang
Bamboo, as a natural hierarchical cellular material, exhibits remarkable mechanical properties including excellent flexibility and fracture toughness. As far as bamboo as a functionally graded bio-composite is concerned, the interactions of different constituents (bamboo fibers; parenchyma cells; and vessels.) alongside their corresponding interfacial areas with a developed crack should be of high significance. Here, by using multi-scale mechanical characterizations coupled with advanced environmental electron microscopy (ESEM), we unambiguously show that fibers' interfacial areas along with parenchyma cells' boundaries were preferred routes for crack growth in both radial and longitudinal directions. Irrespective of the honeycomb structure of fibers along with cellular configuration of parenchyma ground, the hollow vessels within bamboo culm affected the crack propagation too, by crack deflection or crack-tip energy dissipation. It is expected that the tortuous crack propagation mode exhibited in the present study could be applicable to other cellular natural materials as well.
2. Alleviate Cellular Congestion Through Opportunistic Trough Filling
Directory of Open Access Journals (Sweden)
Yichuan Wang
Full Text Available The demand for cellular data service has been skyrocketing since the debut of data-intensive smart phones and touchpads. However, not all data are created equal. Many popular applications on mobile devices, such as email synchronization and social network updates, are delay tolerant. In addition, cellular load varies significantly in both large and small time scales. To alleviate network congestion and improve network performance, we present a set of opportunistic trough filling schemes that leverage the time-variation of network congestion and delay-tolerance of certain traffic in this paper. We consider average delay, deadline, and clearance time as the performance metrics. Simulation results show promising performance improvement over the standard schemes. The work shed lights on addressing the pressing issue of cellular overload.
3. Cellularity of certain quantum endomorphism algebras
DEFF Research Database (Denmark)
Andersen, Henning Haahr; Lehrer, G. I.; Zhang, R.
Let $\\tA=\\Z[q^{\\pm \\frac{1}{2}}][([d]!)\\inv]$ and let $\\Delta_{\\tA}(d)$ be an integral form of the Weyl module of highest weight $d \\in \\N$ of the quantised enveloping algebra $\\U_{\\tA}$ of $\\fsl_2$. We exhibit for all positive integers $r$ an explicit cellular structure for $\\End...... of endomorphism algebras, and another which relates the multiplicities of indecomposable summands to the dimensions of simple modules for an endomorphism algebra. Our cellularity result then allows us to prove that knowledge of the dimensions of the simple modules of the specialised cellular algebra above...... is equivalent to knowledge of the weight multiplicities of the tilting modules for $\\U_{\\zeta}(\\fsl_2)$. In the final section we independently determine the weight multiplicities of indecomposable tilting modules for $U_\\zeta(\\fsl_2)$ and the decomposition numbers of the endomorphism algebras. We indicate how...
4. Performance comparison of virtual cellular manufacturing with functional and cellular layouts in DRC settings
NARCIS (Netherlands)
Suresh, N.; Slomp, J.
This study investigates the performance of virtual cellular manufacturing (VCM) systems, comparing them with functional layouts (FL) and traditional, physical cellular layout (CL), in a dual-resource-constrained (DRC) system context. VCM systems employ logical cells, retaining the process layouts of
5. Virtual networks in the cellular domain
Söderström, Gustav
Data connectivity between cellular devices can be achieved in different ways. It is possible to enable full IPconnectivity in the cellular networks. However this connectivity is combined with a lot of issues such as security problems and the IPv4 address space being depleted. As a result of this many operators use Network Address Translation in their packet data networks, preventing users in different networks from being able to contact each other. Even if a transition to IPv6 takes place an...
6. The cellular decision between apoptosis and autophagy
Institute of Scientific and Technical Information of China (English)
Yong-Jun Fan; Wei-Xing Zong
Apoptosis and autophagy are important molecular processes that maintain organismal and cellular homeostasis,respectively.While apoptosis fulfills its role through dismantling damaged or unwanted cells,autophagy maintains cellular homeostasis through recycling selective intracellular organelles and molecules.Yet in some conditions,autophagy can lead to cell death.Apoptosis and autophagy can be stimulated by the same stresses.Emerging evidence indicates an interplay between the core proteins in both pathways,which underlies the molecular mechanism of the crosstalk between apoptosis and autophagy.This review summarizes recent literature on molecules that regulate both the apoptotic and autophagic processes.
7. Cellular basis of Alzheimer′s disease
Directory of Open Access Journals (Sweden)
Bali Jitin
8. Cellular basis of Alzheimer’s disease (United States)
9. Cellular-based sea level gauge
Digital Repository Service at National Institute of Oceanography (India)
Desai, R.G.P.; Joseph, A.
, and cellular modem are mounted on the top portion of this structure. The pressure sensor and the logger are continuously powered on, and their electrical current consumption is 30 and 15 mA respectively. The cellular modem consumes 15 mA and 250 mA during... standby and data transmission modes, respectively. The pressure sensor located below the low-tide level measures the hydrostatic pressure of the overlying water layer. An indigenously designed and developed microprocessor-based data logger interrogates...
10. Refining cellular automata with routing constraints
Millo, Jean-Vivien; De Simone, Robert
A cellular automaton (CA) is an infinite array of cells, each containing the same automaton. The dynamics of a CA is distributed over the cells where each computes its next state as a function of the previous states of its neighborhood. Thus, the transmission of such states between neighbors is considered as feasible directly, in no time. When considering the implementation of a cellular automaton on a many-cores System-on-Chip (SoC), this state transmission is no longer abstract and instanta...
11. Cellular telephone use and cancer risk
DEFF Research Database (Denmark)
-up of a large nationwide cohort of 420,095 persons whose first cellular telephone subscription was between 1982 and 1995 and who were followed through 2002 for cancer incidence. Standardized incidence ratios (SIRs) were calculated by dividing the number of observed cancer cases in the cohort by the number....... The risk for smoking-related cancers was decreased among men (SIR = 0.88, 95% CI = 0.86 to 0.91) but increased among women (SIR = 1.11, 95% CI = 1.02 to 1.21). Additional data on income and smoking prevalence, primarily among men, indicated that cellular telephone users who started subscriptions in the mid...
12. External insulation with cellular plastic materials
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt; Nielsen, Anker
External thermal insulation composite systems (ETICS) can be used as extra insulation of existing buildings. The system can be made of cellular plastic materials or mineral wool. There is a European Technical guideline, ETAG 004, that describe the tests that shall be conducted on such systems....... This paper gives a comparison of systems with mineral wool and cellular plastic, based on experience from practice and literature. It is important to look at the details in the system and at long time stability of the properties such as thermal insulation, moisture and fire. Investigation of fire properties...... insulation....
13. Toxicology and cellular effect of manufactured nanomaterials (United States)
Chen, Fanqing
The increasing use of nanotechnology in consumer products and medical applications underlies the importance of understanding its potential toxic effects to people and the environment. Herein are described methods and assays to predict and evaluate the cellular effects of nanomaterial exposure. Exposing cells to nanomaterials at cytotoxic doses induces cell cycle arrest and increases apoptosis/necrosis, activates genes involved in cellular transport, metabolism, cell cycle regulation, and stress response. Certain nanomaterials induce genes indicative of a strong immune and inflammatory response within skin fibroblasts. Furthermore, the described multiwall carbon nanoonions (MWCNOs) can be used as a therapeutic in the treatment of cancer due to its cytotoxicity.
Institute of Scientific and Technical Information of China (English)
Shan Lianhai; Ouyang Yuling; Yuan Zhi; Fang Weidong; Hu Honglin
Wireless Sensor Networks (WSNs) have been applied in many different areas.Energy etficient algorithms and protocols have become one of the most challenging issues for WSN.Many researchers focused on developing energy efficient clustering algorithms for WSN,but less research has been concerned in the mobile User Equipment (UE) acting as a Cluster Head (CH) for data transmission between cellular networks and WSNs.In this paper,we propose a cellular-assisted UE CH selection algorithm for the WSN,which considers several parameters to choose the optimal UE gateway CH.We analyze the energy cost of data transmission from a sensor node to the next node or gateway and calculate the whole system energy cost for a WSN.Simulation results show that better system performance,in terms of system energy cost and WSNs life time,can be achieved by using interactive optimization with cellular networks.
15. The coevolutionary roots of biochemistry and cellular organization challenge the RNA world paradigm. (United States)
Caetano-Anollés, Gustavo; Seufferheld, Manfredo J
The origin and evolution of modern biochemistry and cellular structure is a complex problem that has puzzled scientists for almost a century. While comparative, functional and structural genomics has unraveled considerable complexity at the molecular level, there is very little understanding of the origin, evolution and structure of the molecules responsible for cellular or viral features in life. Recent efforts, however, have dissected the emergence of the very early molecules that populated primordial cells. Deep historical signal was retrieved from a census of molecular structures and functions in thousands of nucleic acid and protein structures and hundreds of genomes using powerful phylogenomic methods. Together with structural, chemical and cell biology considerations, this information reveals that modern biochemistry is the result of the gradual evolutionary appearance and accretion of molecular parts and molecules. These patterns comply with the principle of continuity and lead to molecular and cellular complexity. Here, we review findings and report possible origins of molecular and cellular structure, the early rise of lipid biosynthetic pathways and components of cytoskeletal microstructures, the piecemeal accumulation of domains in ATP synthase complexes and the origin and evolution of the ribosome. Phylogenomic studies suggest the last universal common ancestor of life, the 'urancestor', had already developed complex cellular structure and bioenergetics. Remarkably, our findings falsify the existence of an ancient RNA world. Instead they are compatible with gradually coevolving nucleic acids and proteins in interaction with increasingly complex cofactors, lipid membrane structures and other cellular components. This changes the perception we have of the rise of modern biochemistry and prompts further analysis of the emergence of biological complexity in an ever-expanding coevolving world of macromolecules.
16. Cellular chain formation in Escherichia coli biofilms
DEFF Research Database (Denmark)
Vejborg, Rebecca Munk; Klemm, Per
17. Cellular grafts in management of leucoderma
Directory of Open Access Journals (Sweden)
Mysore Venkataram
Full Text Available Cellular grafting methods constitute important advances in the surgical management of leucoderma. Different methods such as noncultured epidermal suspensions, melanocyte cultures, and melanocyte-keratinocyte cultures have all been shown to be effective. This article reviews these methods.
18. Cellular basis of memory for addiction. (United States)
Nestler, Eric J
DESPITE THE IMPORTANCE OF NUMEROUS PSYCHOSOCIAL FACTORS, AT ITS CORE, DRUG ADDICTION INVOLVES A BIOLOGICAL PROCESS: the ability of repeated exposure to a drug of abuse to induce changes in a vulnerable brain that drive the compulsive seeking and taking of drugs, and loss of control over drug use, that define a state of addiction. Here, we review the types of molecular and cellular adaptations that occur in specific brain regions to mediate addiction-associated behavioral abnormalities. These include alterations in gene expression achieved in part via epigenetic mechanisms, plasticity in the neurophysiological functioning of neurons and synapses, and associated plasticity in neuronal and synaptic morphology mediated in part by altered neurotrophic factor signaling. Each of these types of drug-induced modifications can be viewed as a form of "cellular or molecular memory." Moreover, it is striking that most addiction-related forms of plasticity are very similar to the types of plasticity that have been associated with more classic forms of "behavioral memory," perhaps reflecting the finite repertoire of adaptive mechanisms available to neurons when faced with environmental challenges. Finally, addiction-related molecular and cellular adaptations involve most of the same brain regions that mediate more classic forms of memory, consistent with the view that abnormal memories are important drivers of addiction syndromes. The goal of these studies which aim to explicate the molecular and cellular basis of drug addiction is to eventually develop biologically based diagnostic tests, as well as more effective treatments for addiction disorders.
19. Cellular Plasticity in Prostate Cancer Bone Metastasis
Directory of Open Access Journals (Sweden)
Dima Y. Jadaan
20. Corneal cellular proliferation and wound healing
Gan, Lisha
Background. Cellular proliferation plays an important role in both physiological and pathological processes. Epithelial hyperplasia in the epithelium, excessive scar formation in retrocorneal membrane formation and neovascularization are examples of excessive proliferation of cornea cells. Lack of proliferative ability causes corneal degeneration. The degree of proliferative and metabolic activity will directly influence corneal transparency and very evidently refractive res...
1. A Quantum Relativistic Prisoner's Dilemma Cellular Automaton (United States)
Alonso-Sanz, Ramón; Carvalho, Márcio; Situ, Haozhen
The effect of variable entangling on the dynamics of a spatial quantum relativistic formulation of the iterated prisoner's dilemma game is studied in this work. The game is played in the cellular automata manner, i.e., with local and synchronous interaction. The game is assessed in fair and unfair contests.
2. The roles of cellular and organismal aging in the development of late-onset maladies. (United States)
Carvalhal Marques, Filipa; Volovik, Yuli; Cohen, Ehud
Numerous disorders, including neurodegenerative diseases and certain types of cancer, manifest late in life. This common feature raises the prospect that an aging-associated decline in the activity of cellular and organismal maintenance mechanisms enables the emergence of these maladies in late life stages. Accordingly, the alteration of aging bears the promise of harnessing the mechanisms that protect the young organism to prevent illness in the elderly. The identification of aging-regulatory pathways has enabled scrutiny of this hypothesis and revealed that the alteration of aging protects invertebrates and mammals from toxic protein aggregation linked to neurodegeneration and from cancer. Here we review the current knowledge on the regulation of aging at the cellular and organismal levels, delineate the mechanistic links between aging and late-onset disorders, describe efforts to develop compounds that protect from these maladies by selectively manipulating aging, and discuss future research directions and possible therapeutic implications of this approach.
3. Recursive definition of global cellular-automata mappings
DEFF Research Database (Denmark)
Feldberg, Rasmus; Knudsen, Carsten; Rasmussen, Steen
A method for a recursive definition of global cellular-automata mappings is presented. The method is based on a graphical representation of global cellular-automata mappings. For a given cellular-automaton rule the recursive algorithm defines the change of the global cellular-automaton mapping as...
4. Holistic design and implementation of pressure actuated cellular structures (United States)
Gramüller, B.; Köke, H.; Hühne, C.
Providing the possibility to develop energy-efficient, lightweight adaptive components, pressure-actuated cellular structures (PACS) are primarily conceived for aeronautics applications. The realization of shape-variable flaps and even airfoils provides the potential to safe weight, increase aerodynamic efficiency and enhance agility. The herein presented holistic design process points out and describes the necessary steps for designing a real-life PACS structure, from the computation of truss geometry to the manufacturing and assembly. The already published methods for the form finding of PACS are adjusted and extended for the exemplary application of a variable-camber wing. The transfer of the form-finding truss model to a cross-sectional design is discussed. The end cap and sealing concept is described together with the implementation of the integral fluid flow. Conceptual limitations due to the manufacturing and assembly processes are discussed. The method’s efficiency is evaluated by finite element method. In order to verify the underlying methods and summarize the presented work a modular real-life demonstrator is experimentally characterized and validates the numerical investigations.
5. Quantitative proteomics reveals cellular targets of celastrol.
Directory of Open Access Journals (Sweden)
Jakob Hansen
6. Cellular circadian clocks in mood disorders. (United States)
McCarthy, Michael J; Welsh, David K
Bipolar disorder (BD) and major depressive disorder (MDD) are heritable neuropsychiatric disorders associated with disrupted circadian rhythms. The hypothesis that circadian clock dysfunction plays a causal role in these disorders has endured for decades but has been difficult to test and remains controversial. In the meantime, the discovery of clock genes and cellular clocks has revolutionized our understanding of circadian timing. Cellular circadian clocks are located in the suprachiasmatic nucleus (SCN), the brain's primary circadian pacemaker, but also throughout the brain and peripheral tissues. In BD and MDD patients, defects have been found in SCN-dependent rhythms of body temperature and melatonin release. However, these are imperfect and indirect indicators of SCN function. Moreover, the SCN may not be particularly relevant to mood regulation, whereas the lateral habenula, ventral tegmentum, and hippocampus, which also contain cellular clocks, have established roles in this regard. Dysfunction in these non-SCN clocks could contribute directly to the pathophysiology of BD/MDD. We hypothesize that circadian clock dysfunction in non-SCN clocks is a trait marker of mood disorders, encoded by pathological genetic variants. Because network features of the SCN render it uniquely resistant to perturbation, previous studies of SCN outputs in mood disorders patients may have failed to detect genetic defects affecting non-SCN clocks, which include not only mood-regulating neurons in the brain but also peripheral cells accessible in human subjects. Therefore, reporters of rhythmic clock gene expression in cells from patients or mouse models could provide a direct assay of the molecular gears of the clock, in cellular clocks that are likely to be more representative than the SCN of mood-regulating neurons in patients. This approach, informed by the new insights and tools of modern chronobiology, will allow a more definitive test of the role of cellular circadian clocks
7. Caenorhabditis elegans maintains highly compartmentalized cellular distribution of metals and steep concentration gradients of manganese.
Directory of Open Access Journals (Sweden)
Gawain McColl
Full Text Available Bioinorganic chemistry is critical to cellular function. Homeostasis of manganese (Mn, for example, is essential for life. A lack of methods for direct in situ visualization of Mn and other biological metals within intact multicellular eukaryotes limits our understanding of management of these metals. We provide the first quantitative subcellular visualization of endogenous Mn concentrations (spanning two orders of magnitude associated with individual cells of the nematode, Caenorhabditis elegans.
8. Chance of Necessity: Modeling Origins of Life (United States)
Pohorille, Andrew
The fundamental nature of processes that led to the emergence of life has been a subject of long-standing debate. One view holds that the origin of life is an event governed by chance, and the result of so many random events is unpredictable. This view was eloquently expressed by Jacques Monod in his book Chance or Necessity. In an alternative view, the origin of life is considered a deterministic event. Its details need not be deterministic in every respect, but the overall behavior is predictable. A corollary to the deterministic view is that the emergence of life must have been determined primarily by universal chemistry and biochemistry rather than by subtle details of environmental conditions. In my lecture I will explore two different paradigms for the emergence of life and discuss their implications for predictability and universality of life-forming processes. The dominant approach is that the origin of life was guided by information stored in nucleic acids (the RNA World hypothesis). In this view, selection of improved combinations of nucleic acids obtained through random mutations drove evolution of biological systems from their conception. An alternative hypothesis states that the formation of protocellular metabolism was driven by non-genomic processes. Even though these processes were highly stochastic the outcome was largely deterministic, strongly constrained by laws of chemistry. I will argue that self-replication of macromolecules was not required at the early stages of evolution; the reproduction of cellular functions alone was sufficient for self-maintenance of protocells. In fact, the precise transfer of information between successive generations of the earliest protocells was unnecessary and could have impeded the discovery of cellular metabolism. I will also show that such concepts as speciation and fitness to the environment, developed in the context of genomic evolution also hold in the absence of a genome.
9. Life is pretty meaningful. (United States)
Heintzelman, Samantha J; King, Laura A
The human experience of meaning in life is widely viewed as a cornerstone of well-being and a central human motivation. Self-reports of meaning in life relate to a host of important functional outcomes. Psychologists have portrayed meaning in life as simultaneously chronically lacking in human life as well as playing an important role in survival. Examining the growing literature on meaning in life, we address the question "How meaningful is life, in general?" We review possible answers from various psychological sources, some of which anticipate that meaning in life should be low and others that it should be high. Summaries of epidemiological data and research using two self-report measures of meaning in life suggest that life is pretty meaningful. Diverse samples rate themselves significantly above the midpoint on self-reports of meaning in life. We suggest that if meaning in life plays a role in adaptation, it must be commonplace, as our analysis suggests.
10. Origins of life systems chemistry (United States)
Sutherland, J.
By reconciling previously conflicting views about the origin of life - in which one or other cellular subsystem emerges first, and then 'invents' the others - a new modus operandi for its study is suggested. Guided by this, a cyanosulfidic protometabolism is uncovered which uses UV light and the stoichiometric reducing power of hydrogen sulfide to convert hydrogen cyanide, and a couple of other prebiotic feedstock molecules which can be derived therefrom, into nucleic acid, peptide and lipid building blocks. Copper plays several key roles in this chemistry, thus, for example, copper(I) catalysed cross coupling and copper(II) driven oxidative crosscoupling reactions generate key feedstock molecules. Geochemical scenarios consistent with this protometabolism are outlined. Finally, the transition of a system from the inanimate to the animate state is considered in the context of there being intermediate stages of partial 'aliveness'.
11. Pattern formation in mutation of "Game of Life"
Institute of Scientific and Technical Information of China (English)
HUANG Wen-gao; PAN Zhi-geng
This paper presents pattern formation in generalized cellular automata (GCA) by varying parameters of classic “game Experiments show the emergence of the self-organizing patterns that is analogous with life forms at the edge of chaos, which consist of certain nontrivial structure and go through periods of growth, maturity and death. We describe these experiments and discuss their potential as alternative way for creating artificial life and generative art, and as a new method for pattern genesis.
12. Quantum features of natural cellular automata (United States)
Elze, Hans-Thomas
Cellular automata can show well known features of quantum mechanics, such as a linear rule according to which they evolve and which resembles a discretized version of the Schrödinger equation. This includes corresponding conservation laws. The class of “natural” Hamiltonian cellular automata is based exclusively on integer-valued variables and couplings and their dynamics derives from an Action Principle. They can be mapped reversibly to continuum models by applying Sampling Theory. Thus, “deformed” quantum mechanical models with a finite discreteness scale l are obtained, which for l → 0 reproduce familiar continuum results. We have recently demonstrated that such automata can form “multipartite” systems consistently with the tensor product structures of nonrelativistic many-body quantum mechanics, while interacting and maintaining the linear evolution. Consequently, the Superposition Principle fully applies for such primitive discrete deterministic automata and their composites and can produce the essential quantum effects of interference and entanglement.
13. Molecular kinesis in cellular function and plasticity. (United States)
Tiedge, H; Bloom, F E; Richter, D
Intracellular transport and localization of cellular components are essential for the functional organization and plasticity of eukaryotic cells. Although the elucidation of protein transport mechanisms has made impressive progress in recent years, intracellular transport of RNA remains less well understood. The National Academy of Sciences Colloquium on Molecular Kinesis in Cellular Function and Plasticity therefore was devised as an interdisciplinary platform for participants to discuss intracellular molecular transport from a variety of different perspectives. Topics covered at the meeting included RNA metabolism and transport, mechanisms of protein synthesis and localization, the formation of complex interactive protein ensembles, and the relevance of such mechanisms for activity-dependent regulation and synaptic plasticity in neurons. It was the overall objective of the colloquium to generate momentum and cohesion for the emerging research field of molecular kinesis.
14. Designing beauty the art of cellular automata
CERN Document Server
Martínez, Genaro
15. A cellular glass substrate solar concentrator (United States)
Bedard, R.; Bell, D.
The design of a second generation point focusing solar concentration is discussed. The design is based on reflective gores fabricated of thin glass mirror bonded continuously to a contoured substrate of cellular glass. The concentrator aperture and structural stiffness was optimized for minimum concentrator cost given the performance requirement of delivering 56 kWth to a 22 cm diameter receiver aperture with a direct normal insolation of 845 watts sq m and an operating wind of 50 kmph. The reflective panel, support structure, drives, foundation and instrumentation and control subsystem designs, optimized for minimum cost, are summarized. The use of cellular glass as a reflective panel substrate material is shown to offer significant weight and cost advantages compared to existing technology materials.
16. Cellular senescence and the aging brain. (United States)
Chinta, Shankar J; Woods, Georgia; Rane, Anand; Demaria, Marco; Campisi, Judith; Andersen, Julie K
Cellular senescence is a potent anti-cancer mechanism that arrests the proliferation of mitotically competent cells to prevent malignant transformation. Senescent cells accumulate with age in a variety of human and mouse tissues where they express a complex 'senescence-associated secretory phenotype' (SASP). The SASP includes many pro-inflammatory cytokines, chemokines, growth factors and proteases that have the potential to cause or exacerbate age-related pathology, both degenerative and hyperplastic. While cellular senescence in peripheral tissues has recently been linked to a number of age-related pathologies, its involvement in brain aging is just beginning to be explored. Recent data generated by several laboratories suggest that both aging and age-related neurodegenerative diseases are accompanied by an increase in SASP-expressing senescent cells of non-neuronal origin in the brain. Moreover, this increase correlates with neurodegeneration. Senescent cells in the brain could therefore constitute novel therapeutic targets for treating age-related neuropathologies.
17. Cellular and Molecular Basis of Cerebellar Development
Directory of Open Access Journals (Sweden)
Salvador eMartinez
Full Text Available Historically, the molecular and cellular mechanisms of cerebellar development were investigated through structural descriptions and studying spontaneous mutations in animal models and humans. Advances in experimental embryology, genetic engineering and neuroimaging techniques render today the possibility to approach the analysis of molecular mechanisms underlying histogenesis and morphogenesis of the cerebellum by experimental designs. Several genes and molecules were identified to be involved in the cerebellar plate regionalization, specification and differentiation of cerebellar neurons, as well as the establishment of cellular migratory routes and the subsequent neuronal connectivity. Indeed, pattern formation of the cerebellum requires the adequate orchestration of both key morphogenetic signals, arising from distinct brain regions, and local expression of specific transcription factors. Thus, the present review wants to revisit and discuss these morphogenetic and molecular mechanisms taking place during cerebellar development in order to understand causal processes regulating cerebellar cytoarchitecture, its highly topographically ordered circuitry and its role in brain function.
18. Cellular automata in image processing and geometry
CERN Document Server
Adamatzky, Andrew; Sun, Xianfang
The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...
19. Prodrug Approach for Increasing Cellular Glutathione Levels
Directory of Open Access Journals (Sweden)
Ivana Cacciatore
Full Text Available Reduced glutathione (GSH is the most abundant non-protein thiol in mammalian cells and the preferred substrate for several enzymes in xenobiotic metabolism and antioxidant defense. It plays an important role in many cellular processes, such as cell differentiation, proliferation and apoptosis. GSH deficiency has been observed in aging and in a wide range of pathologies, including neurodegenerative disorders and cystic fibrosis (CF, as well as in several viral infections. Use of GSH as a therapeutic agent is limited because of its unfavorable biochemical and pharmacokinetic properties. Several reports have provided evidence for the use of GSH prodrugs able to replenish intracellular GSH levels. This review discusses different strategies for increasing GSH levels by supplying reversible bioconjugates able to cross the cellular membrane more easily than GSH and to provide a source of thiols for GSH synthesis.
20. Mobile node localization in cellular networks
CERN Document Server
Malik, Yasir; Abdulrazak, Bessam; Tariq, Usman; 10.5121/ijwmn.2011.3607
Location information is the major component in location based applications. This information is used in different safety and service oriented applications to provide users with services according to their Geolocation. There are many approaches to locate mobile nodes in indoor and outdoor environments. In this paper, we are interested in outdoor localization particularly in cellular networks of mobile nodes and presented a localization method based on cell and user location information. Our localization method is based on hello message delay (sending and receiving time) and coordinate information of Base Transceiver Station (BTSs). To validate our method across cellular network, we implemented and simulated our method in two scenarios i.e. maintaining database of base stations in centralize and distributed system. Simulation results show the effectiveness of our approach and its implementation applicability in telecommunication systems.
1. Mobile Node Localization in Cellular Networks
Directory of Open Access Journals (Sweden)
Yasir Malik
Full Text Available Location information is the major component in location based applications. This information is used in different safety and service oriented applications to provide users with services according to their Geolocation. There are many approaches to locate mobile nodes in indoor and outdoor environments. In thispaper, we are interested in outdoor localization particularly in cellular networks of mobile nodes andpresented a localization method based on cell and user location information. Our localization method is based on hello message delay (sending and receiving time and coordinate information of Base Transceiver Station (BTSs. To validate our method across cellular network, we implemented and simulated our method in two scenarios i.e. maintaining database of base stations in centralize and distributed system. Simulation results show the effectiveness of our approach and its implementation applicability in telecommunication systems.
2. A Modified Sensitive Driving Cellular Automaton Model
Institute of Scientific and Technical Information of China (English)
GE Hong-Xia; DAI Shi-Qiang; DONG Li-Yun; LEI Li
A modified cellular automaton model for traffic flow on highway is proposed with a novel concept about the variable security gap. The concept is first introduced into the original Nagel-Schreckenberg model, which is called the non-sensitive driving cellular automaton model. And then it is incorporated with a sensitive driving NaSch model,in which the randomization brake is arranged before the deterministic deceleration. A parameter related to the variable security gap is determined through simulation. Comparison of the simulation results indicates that the variable security gap has different influence on the two models. The fundamental diagram obtained by simulation with the modified sensitive driving NaSch model shows that the maximumflow are in good agreement with the observed data, indicating that the presented model is more reasonable and realistic.
3. Quantum features of natural cellular automata
CERN Document Server
Elze, Hans-Thomas
Cellular automata can show well known features of quantum mechanics, such as a linear rule according to which they evolve and which resembles a discretized version of the Schroedinger equation. This includes corresponding conservation laws. The class of "natural" Hamiltonian cellular automata is based exclusively on integer-valued variables and couplings and their dynamics derives from an Action Principle. They can be mapped reversibly to continuum models by applying Sampling Theory. Thus, "deformed" quantum mechanical models with a finite discreteness scale $l$ are obtained, which for $l\\rightarrow 0$ reproduce familiar continuum results. We have recently demonstrated that such automata can form "multipartite" systems consistently with the tensor product structures of nonrelativistic many-body quantum mechanics, while interacting and maintaining the linear evolution. Consequently, the Superposition Principle fully applies for such primitive discrete deterministic automata and their composites and can produce...
4. WD40 proteins propel cellular networks. (United States)
Stirnimann, Christian U; Petsalaki, Evangelia; Russell, Robert B; Müller, Christoph W
Recent findings indicate that WD40 domains play central roles in biological processes by acting as hubs in cellular networks; however, they have been studied less intensely than other common domains, such as the kinase, PDZ or SH3 domains. As suggested by various interactome studies, they are among the most promiscuous interactors. Structural studies suggest that this property stems from their ability, as scaffolds, to interact with diverse proteins, peptides or nucleic acids using multiple surfaces or modes of interaction. A general scaffolding role is supported by the fact that no WD40 domain has been found with intrinsic enzymatic activity despite often being part of large molecular machines. We discuss the WD40 domain distributions in protein networks and structures of WD40-containing assemblies to demonstrate their versatility in mediating critical cellular functions.
5. Cellular Dynamics Revealed by Digital Holographic Microscopy☆
KAUST Repository
Marquet, P.
Digital holographic microscopy (DHM) is a new optical method that provides, without the use of any contrast agent, real-time, three-dimensional images of transparent living cells, with an axial sensitivity of a few tens of nanometers. They result from the hologram numerical reconstruction process, which permits a sub wavelength calculation of the phase shift, produced on the transmitted wave front, by the optically probed cells, namely the quantitative phase signal (QPS). Specifically, in addition to measurements of cellular surface morphometry and intracellular refractive index (RI), various biophysical cellular parameters including dry mass, absolute volume, membrane fluctuations at the nanoscale and biomechanical properties, transmembrane water permeability as swell as current, can be derived from the QPS. This article presents how quantitative phase DHM (QP-DHM) can explored cell dynamics at the nanoscale with a special attention to both the study of neuronal dynamics and the optical resolution of local neuronal network.
Energy Technology Data Exchange (ETDEWEB)
Cellular automata provide a fascinating class of dynamical systems based on very simple rules of evolution yet capable of displaying highly complex behavior. These include simplified models for many phenomena seen in nature. Among other things, they provide insight into self-organized criticality, wherein dissipative systems naturally drive themselves to a critical state with important phenomena occurring over a wide range of length and the scales. This article begins with an overview of self-organized criticality. This is followed by a discussion of a few examples of simple cellular automaton systems, some of which may exhibit critical behavior. Finally, some of the fascinating exact mathematical properties of the Bak-Tang-Wiesenfeld sand-pile model [1] are discussed.
7. Cellular responses to environmental DNA damage
Energy Technology Data Exchange (ETDEWEB)
This volume contains the proceedings of the conference entitled Cellular Responses to Environmental DNA Damage held in Banff,Alberta December 1--6, 1991. The conference addresses various aspects of DNA repair in sessions titled DNA repair; Basic Mechanisms; Lesions; Systems; Inducible Responses; Mutagenesis; Human Population Response Heterogeneity; Intragenomic DNA Repair Heterogeneity; DNA Repair Gene Cloning; Aging; Human Genetic Disease; and Carcinogenesis. Individual papers are represented as abstracts of about one page in length.
8. Leiomyoma cellulare in postoperative material: clinical cases
Introduction: Leiomyoma in one of the most common benign endometrial cancers. Location of the myoma in the cervix and the area of the broad ligament of the uterus is rare. Leiomyoma cellulare (LC) occurs in about 5.0% of leiomyoma cases. Aim of the research: To determine the occurrence of LC among 294 cases of myomas as well as myomas and uterine endometriosis, found in postoperative examinations. Material and methods: Patients were qualified for the surgery based on a gynaecolog...
9. Imaging cellular and molecular biological functions
Energy Technology Data Exchange (ETDEWEB)
Shorte, S.L. [Institut Pasteur, 75 - Paris (France). Plateforme d' Imagerie Dynamique PFID-Imagopole; Frischknecht, F. (eds.) [Heidelberg Univ. Medical School (Germany). Dept. of Parasitology
10. Cognitive resource management for heterogeneous cellular networks
CERN Document Server
Liu, Yongkang
This Springer Brief focuses on cognitive resource management in heterogeneous cellular networks (Het Net) with small cell deployment for the LTE-Advanced system. It introduces the Het Net features, presents practical approaches using cognitive radio technology in accommodating small cell data relay and optimizing resource allocation and examines the effectiveness of resource management among small cells given limited coordination bandwidth and wireless channel uncertainty. The authors introduce different network characteristics of small cell, investigate the mesh of small cell access points in
11. Cellular immune findings in Lyme disease. (United States)
Sigal, L. H.; Moffat, C. M.; Steere, A. C.; Dwyer, J. M.
From 1981 through 1983, we did the first testing of cellular immunity in Lyme disease. Active established Lyme disease was often associated with lymphopenia, less spontaneous suppressor cell activity than normal, and a heightened response of lymphocytes to phytohemagglutinin and Lyme spirochetal antigens. Thus, a major feature of the immune response during active disease seems to be a lessening of suppression, but it is not yet known whether this response plays a role in the pathophysiology of the disease. PMID:6240164
12. Light weight cellular structures based on aluminium
Energy Technology Data Exchange (ETDEWEB)
Prakash, O. [Indian Inst. of Tech., Kanpur (India); Embury, J.D.; Sinclair, C. [McMaster Univ., Hamilton, ON (Canada); Sang, H. [Queen`s Univ., Kingston, ON (Canada); Silvetti, P. [Cordoba Univ. Nacional (Argentina). Facultad de Ciencias Exactas, Fisicas y Naturales
An interesting form of lightweight material which has emerged in the past 2 decades is metallic foam. This paper deals with the basic concepts of making metallic foams and a detailed study of foams produced from Al-SiC. In addition, some aspects of cellular solids based on honeycomb structures are outlined including the concept of producing both two-phase foams and foams with composite walls.
13. Cellularity of certain quantum endomorphism algebras
DEFF Research Database (Denmark)
Andersen, Henning Haahr; Lehrer, Gus; Zhang, Ruibin
structure are described in terms of certain Temperley–Lieb-like diagrams. We also prove general results that relate endomorphism algebras of specialisations to specialisations of the endomorphism algebras. When ζ is a root of unity of order bigger than d we consider the Uζ-module structure...... we independently recover the weight multiplicities of indecomposable tilting modules for Uζ(sl2) from the decomposition numbers of the endomorphism algebras, which are known through cellular theory....
14. Empirical multiscale networks of cellular regulation.
Directory of Open Access Journals (Sweden)
Benjamin de Bivort
Full Text Available Grouping genes by similarity of expression across multiple cellular conditions enables the identification of cellular modules. The known functions of genes enable the characterization of the aggregate biological functions of these modules. In this paper, we use a high-throughput approach to identify the effective mutual regulatory interactions between modules composed of mouse genes from the Alliance for Cell Signaling (AfCS murine B-lymphocyte database which tracks the response of approximately 15,000 genes following chemokine perturbation. This analysis reveals principles of cellular organization that we discuss along four conceptual axes. (1 Regulatory implications: the derived collection of influences between any two modules quantifies intuitive as well as unexpected regulatory interactions. (2 Behavior across scales: trends across global networks of varying resolution (composed of various numbers of modules reveal principles of assembly of high-level behaviors from smaller components. (3 Temporal behavior: tracking the mutual module influences over different time intervals provides features of regulation dynamics such as duration, persistence, and periodicity. (4 Gene Ontology correspondence: the association of modules to known biological roles of individual genes describes the organization of functions within coexpressed modules of various sizes. We present key specific results in each of these four areas, as well as derive general principles of cellular organization. At the coarsest scale, the entire transcriptional network contains five divisions: two divisions devoted to ATP production/biosynthesis and DNA replication that activate all other divisions, an "extracellular interaction" division that represses all other divisions, and two divisions (proliferation/differentiation and membrane infrastructure that activate and repress other divisions in specific ways consistent with cell cycle control.
15. pna - assisted cellular migration on patterned surfaces
ABSTRACT - The ability to control the cellular microenvironment, such as cell-substrate and cell-cell interactions at the micro- and nanoscale, is important for advances in several fields such as medicine and immunology, biochemistry, biomaterials, and tissue engineering. In order to undergo fundamental biological processes, most mammalian cells must adhere to the underlying extracellular matrix (ECM), eliciting cell adhesion and migration processes that are critical to embryogenesis, angioge...
16. Introduction to Tissular and Cellular Engineering
Institute of Scientific and Technical Information of China (English)
Most human tissues do not regenerate spontaneously, which is why cellular therapies and tissular engineering are promising alternatives. The principle is simple: cells are sampled in a patient and introduced in the damaged tissue or in a tridimentional porous support and cultivated in a bioreactor in which the physico-chemical and mechanical parameters are controlled. Once the tissues (or the cells) are mature they may be implanted. In parallel, the development of biotherapies with stem cells is a field of ...
17. Cellular Kinetics of Perivascular MSC Precursors
Directory of Open Access Journals (Sweden)
William C. W. Chen
18. Cellular arsenic transport pathways in mammals. (United States)
Roggenbeck, Barbara A; Banerjee, Mayukh; Leslie, Elaine M
Natural contamination of drinking water with arsenic results in the exposure of millions of people world-wide to unacceptable levels of this metalloid. This is a serious global health problem because arsenic is a Group 1 (proven) human carcinogen and chronic exposure is known to cause skin, lung, and bladder tumors. Furthermore, arsenic exposure can result in a myriad of other adverse health effects including diseases of the cardiovascular, respiratory, neurological, reproductive, and endocrine systems. In addition to chronic environmental exposure to arsenic, arsenic trioxide is approved for the clinical treatment of acute promyelocytic leukemia, and is in clinical trials for other hematological malignancies as well as solid tumors. Considerable inter-individual variability in susceptibility to arsenic-induced disease and toxicity exists, and the reasons for such differences are incompletely understood. Transport pathways that influence the cellular uptake and export of arsenic contribute to regulating its cellular, tissue, and ultimately body levels. In the current review, membrane proteins (including phosphate transporters, aquaglyceroporin channels, solute carrier proteins, and ATP-binding cassette transporters) shown experimentally to contribute to the passage of inorganic, methylated, and/or glutathionylated arsenic species across cellular membranes are discussed. Furthermore, what is known about arsenic transporters in organs involved in absorption, distribution, and metabolism and how transport pathways contribute to arsenic elimination are described.
19. A Real Space Cellular Automaton Laboratory (United States)
Rozier, O.; Narteau, C.
Investigations in geomorphology may benefit from computer modelling approaches that rely entirely on self-organization principles. In the vast majority of numerical models, instead, points in space are characterised by a variety of physical variables (e.g. sediment transport rate, velocity, temperature) recalculated over time according to some predetermined set of laws. However, there is not always a satisfactory theoretical framework from which we can quantify the overall dynamics of the system. For these reasons, we prefer to concentrate on interaction patterns using a basic cellular automaton modelling framework, the Real Space Cellular Automaton Laboratory (ReSCAL), a powerful and versatile generator of 3D stochastic models. The objective of this software suite released under a GNU license is to develop interdisciplinary research collaboration to investigate the dynamics of complex systems. The models in ReSCAL are essentially constructed from a small number of discrete states distributed on a cellular grid. An elementary cell is a real-space representation of the physical environment and pairs of nearest neighbour cells are called doublets. Each individual physical process is associated with a set of doublet transitions and characteristic transition rates. Using a modular approach, we can simulate and combine a wide range of physical, chemical and/or anthropological processes. Here, we present different ingredients of ReSCAL leading to applications in geomorphology: dune morphodynamics and landscape evolution. We also discuss how ReSCAL can be applied and developed across many disciplines in natural and human sciences.
20. [Cellular and molecular mechanisms of memory]. (United States)
Laroche, Serge
A defining characteristic of the brain is its remarkable capacity to undergo activity-dependent functional and morphological remodelling via mechanisms of plasticity that form the basis of our capacity to encode and retain memories. Today, it is generally accepted that one key neurobiological mechanism underlying the formation of memories reside in activity-driven modifications of synaptic strength and structural remodelling of neural networks activated during learning. The discovery and detailed report of the phenomenon generally known as long-term potentiation, a long-lasting activity-dependent form of synaptic strengthening, opened a new chapter in the study of the neurobiological substrate of memory in the vertebrate brain, and this form of synaptic plasticity has now become the dominant model in the search for the cellular bases of learning and memory. To date, the key events in the cellular and molecular mechanisms underlying synaptic plasticity and memory formation are starting to be identified. They require the activation of specific receptors and of several molecular cascades to convert extracellular signals into persistent functional changes in neuronal connectivity. Accumulating evidence suggests that the rapid activation of neuronal gene programs is a key mechanism underlying the enduring modification of neural networks required for the laying down of memory. The recent developments in the search for the cellular and molecular mechanisms of memory storage are reviewed.
1. Coordination of autophagy with other cellular activities
Institute of Scientific and Technical Information of China (English)
Yan WANG; Zheng-hong QIN
The cell biological phenomenon of autophagy has attracted increasing attention in recent years,partly as a consequence of the discovery of key components of its cellular machinery.Autophagy plays a crucial role in a myriad of cellular functions.Autophagy has its own regulatory mechanisms,but this process is not isolated.Autophagy is coordinated with other cellular activities to maintain cell homeostasis.Autophagy is critical for a range of human physiological processes.The multifunctional roles of autophagy are explained by its ability to interact with several key components of various cell pathways.In this review,we focus on the coordination between autophagy and other physiological processes,including the ubiquitin-proteasome system (UPS),energy homeostasis,aging,programmed cell death,the immune responses,microbial invasion and inflammation.The insights gained from investigating autophagic networks should increase our understanding of their roles in human diseases and their potential as targets for therapeutic intervention.
2. Dynamic Channel Allocation in Sectored Cellular Systems
Institute of Scientific and Technical Information of China (English)
It is known that dynamic channel assignment(DCA) strategy outperforms the fixed channel assignment(FCA) strategy in omni-directional antenna cellular systems. One of the most important methods used in DCA was channel borrowing. But with the emergence of cell sectorization and spatial division multiple access(SDMA) which are used to increase the capacity of cellular systems, the channel assignment faces a series of new problems. In this paper, a dynamic channel allocation scheme based on sectored cellular systems is proposed. By introducing intra-cell channel borrowing (borrowing channels from neighboring sectors) and inter-cell channel borrowing (borrowing channels from neighboring cells) methods, previous DCA strategies, including compact pattern based channel borrowing(CPCB) and greedy based dynamic channel assignment(GDCA) schemes proposed by the author, are improved significantly. The computer simulation shows that either intra-cell borrowing scheme or inter-cell borrowing scheme is efficient enough to uniform and non-uniform traffic service distributions.
3. HDACi: cellular effects, opportunities for restorative dentistry.
LENUS (Irish Health Repository)
Duncan, H F
Acetylation of histone and non-histone proteins alters gene expression and induces a host of cellular effects. The acetylation process is homeostatically balanced by two groups of cellular enzymes, histone acetyltransferases (HATs) and histone deacetylases (HDACs). HAT activity relaxes the structure of the human chromatin, rendering it transcriptionally active, thereby increasing gene expression. In contrast, HDAC activity leads to gene silencing. The enzymatic balance can be \\'tipped\\' by histone deacetylase inhibitors (HDACi), leading to an accumulation of acetylated proteins, which subsequently modify cellular processes including stem cell differentiation, cell cycle, apoptosis, gene expression, and angiogenesis. There is a variety of natural and synthetic HDACi available, and their pleiotropic effects have contributed to diverse clinical applications, not only in cancer but also in non-cancer areas, such as chronic inflammatory disease, bone engineering, and neurodegenerative disease. Indeed, it appears that HDACi-modulated effects may differ between \\'normal\\' and transformed cells, particularly with regard to reactive oxygen species accumulation, apoptosis, proliferation, and cell cycle arrest. The potential beneficial effects of HDACi for health, resulting from their ability to regulate global gene expression by epigenetic modification of DNA-associated proteins, also offer potential for application within restorative dentistry, where they may promote dental tissue regeneration following pulpal damage.
4. The Origin of Life (United States)
Hodson, D.
Presents an outline of lectures given on this topic to British secondary students. Man's various ideas about the origin of life are included in three categories: those that consider life to have been created by a Divine Being; those that consider life to have developed from non-living matter; and those that consider life to be eternal. (MLH)
5. Proteoglycans act as cellular hepatitis delta virus attachment receptors. (United States)
Lamas Longarela, Oscar; Schmidt, Tobias T; Schöneweis, Katrin; Romeo, Raffaella; Wedemeyer, Heiner; Urban, Stephan; Schulze, Andreas
The hepatitis delta virus (HDV) is a small, defective RNA virus that requires the presence of the hepatitis B virus (HBV) for its life cycle. Worldwide more than 15 million people are co-infected with HBV and HDV. Although much effort has been made, the early steps of the HBV/HDV entry process, including hepatocyte attachment and receptor interaction are still not fully understood. Numerous possible cellular HBV/HDV binding partners have been described over the last years; however, so far only heparan sulfate proteoglycans have been functionally confirmed as cell-associated HBV attachment factors. Recently, it has been suggested that ionotrophic purinergic receptors (P2XR) participate as receptors in HBV/HDV entry. Using the HBV/HDV susceptible HepaRG cell line and primary human hepatocytes (PHH), we here demonstrate that HDV entry into hepatocytes depends on the interaction with the glycosaminoglycan (GAG) side chains of cellular heparan sulfate proteoglycans. We furthermore provide evidence that P2XR are not involved in HBV/HDV entry and that effects observed with inhibitors for these receptors are a consequence of their negative charge. HDV infection was abrogated by soluble GAGs and other highly sulfated compounds. Enzymatic removal of defined carbohydrate structures from the cell surface using heparinase III or the obstruction of GAG synthesis by sodium chlorate inhibited HDV infection of HepaRG cells. Highly sulfated P2XR antagonists blocked HBV/HDV infection of HepaRG cells and PHH. In contrast, no effect on HBV/HDV infection was found when uncharged P2XR antagonists or agonists were applied. In summary, HDV infection, comparable to HBV infection, requires binding to the carbohydrate side chains of hepatocyte-associated heparan sulfate proteoglycans as attachment receptors, while P2XR are not actively involved.
6. [Division of regulatory cellular systems (Lvov)]. (United States)
Kusen', S I
Two departments of the A. V. Palladin Institute of Biochemistry of the National Academy of Sciences of Ukraine were founded in 1969 in Lviv. These were: the Department of Biochemistry of Cell Differentiation headed by Professor S. I. Kusen and Department of Regulation of Cellular Synthesis of Low Molecular Weight Compounds headed by Professor G. M. Shavlovsky. The Lviv Division of the A. V. Palladin Institute of Biochemistry of the National Academy of Sciences of Ukraine with Professor S. I. Kusen as its chief, was founded in 1974 on the basis of these departments and the Laboratory of Modelling of Regulatory Cellular Systems headed by Professor M. P. Derkach. The above mentioned laboratory which was not the structural unit obtained the status of Structural Laboratory of Cellular Biophysics in 1982 and was headed by O. A. Goida, Candidate of biological sciences. From 1983 the Laboratory of Correcting Therapy of Malignant Tumors and Hemoblastoses at the Institute of Molecular Biology and Genetics, Academy of Sciences of Ukraine (Chief--S. V. Ivasivka, Candidate of medical sciences) was included in the structure of the Division. That Laboratory was soon transformed into the Department of Carbohydrate Metabolism Regulation headed by Professor I. D. Holovatsky. In 1988 this Department was renamed into the Department of Glycoprotein Biochemistry and headed by M. D. Lutsik, Doctor of biological sciences. In 1982 one more Laboratory of Biochemical Genetics was founded at the Department of Regulation of Cellular Synthesis of Low Molecular Weight Compounds, in 1988 it was transformed into the Department of Biochemical Genetics (Chief--Professor A. A. Sibirny). In 1989 the Laboratory of Anion Transport was taken from A. V. Palladin Institute of Biochemistry, Academy of Sciences of Ukraine to Lviv Division of this Institute. This laboratory was headed by Professor M. M. Veliky. One more reorganization in the Division structure took place in 1994. The Department of
7. Cellular membrane trafficking of mesoporous silica nanoparticles
Energy Technology Data Exchange (ETDEWEB)
Fang, I-Ju [Iowa State Univ., Ames, IA (United States)
This dissertation mainly focuses on the investigation of the cellular membrane trafficking of mesoporous silica nanoparticles. We are interested in the study of endocytosis and exocytosis behaviors of mesoporous silica nanoparticles with desired surface functionality. The relationship between mesoporous silica nanoparticles and membrane trafficking of cells, either cancerous cells or normal cells was examined. Since mesoporous silica nanoparticles were applied in many drug delivery cases, the endocytotic efficiency of mesoporous silica nanoparticles needs to be investigated in more details in order to design the cellular drug delivery system in the controlled way. It is well known that cells can engulf some molecules outside of the cells through a receptor-ligand associated endocytosis. We are interested to determine if those biomolecules binding to cell surface receptors can be utilized on mesoporous silica nanoparticle materials to improve the uptake efficiency or govern the mechanism of endocytosis of mesoporous silica nanoparticles. Arginine-glycine-aspartate (RGD) is a small peptide recognized by cell integrin receptors and it was reported that avidin internalization was highly promoted by tumor lectin. Both RGD and avidin were linked to the surface of mesoporous silica nanoparticle materials to investigate the effect of receptor-associated biomolecule on cellular endocytosis efficiency. The effect of ligand types, ligand conformation and ligand density were discussed in Chapter 2 and 3. Furthermore, the exocytosis of mesoporous silica nanoparticles is very attractive for biological applications. The cellular protein sequestration study of mesoporous silica nanoparticles was examined for further information of the intracellular pathway of endocytosed mesoporous silica nanoparticle materials. The surface functionality of mesoporous silica nanoparticle materials demonstrated selectivity among the materials and cancer and normal cell lines. We aimed to determine
8. Mapping of cellular iron using hyperspectral fluorescence imaging in a cellular model of Parkinson's disease (United States)
Oh, Eung Seok; Heo, Chaejeong; Kim, Ji Seon; Lee, Young Hee; Kim, Jong Min
Parkinson's disease (PD) is characterized by progressive dopaminergic cell loss in the substantianigra (SN) and elevated iron levels demonstrated by autopsy and with 7-Tesla magnetic resonance imaging. Direct visualization of iron with live imaging techniques has not yet been successful. The aim of this study is to visualize and quantify the distribution of cellular iron using an intrinsic iron hyperspectral fluorescence signal. The 1-methyl-4-phenylpyridinium (MPP+)-induced cellular model of PD was established in SHSY5Y cells. The cells were exposed to iron by treatment with ferric ammonium citrate (FAC, 100 μM) for up to 6 hours. The hyperspectral fluorescence imaging signal of iron was examined usinga high- resolution dark-field optical microscope system with signal absorption for the visible/ near infrared (VNIR) spectral range. The 6-hour group showed heavy cellular iron deposition compared with the small amount of iron accumulation in the 1-hour group. The cellular iron was dispersed in a small, particulate form, whereas extracellular iron was detected in an aggregated form. In addition, iron particles were found to be concentrated on the cell membrane/edge of shrunken cells. The cellular iron accumulation readily occurred in MPP+-induced cells, which is consistent with previous studies demonstrating elevated iron levels in the SN in PD. This direct iron imaging methodology could be applied to analyze the physiological role of iron in PD, and its application might be expanded to various neurological disorders involving other metals, such as copper, manganese or zinc.
9. Cellular Particle Dynamics simulation of biomechanical relaxation processes of multi-cellular systems (United States)
McCune, Matthew; Kosztin, Ioan
Cellular Particle Dynamics (CPD) is a theoretical-computational-experimental framework for describing and predicting the time evolution of biomechanical relaxation processes of multi-cellular systems, such as fusion, sorting and compression. In CPD, cells are modeled as an ensemble of cellular particles (CPs) that interact via short range contact interactions, characterized by an attractive (adhesive interaction) and a repulsive (excluded volume interaction) component. The time evolution of the spatial conformation of the multicellular system is determined by following the trajectories of all CPs through numerical integration of their equations of motion. Here we present CPD simulation results for the fusion of both spherical and cylindrical multi-cellular aggregates. First, we calibrate the relevant CPD model parameters for a given cell type by comparing the CPD simulation results for the fusion of two spherical aggregates to the corresponding experimental results. Next, CPD simulations are used to predict the time evolution of the fusion of cylindrical aggregates. The latter is relevant for the formation of tubular multi-cellular structures (i.e., primitive blood vessels) created by the novel bioprinting technology. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.
10. Shape Memory Alloy-Based Periodic Cellular Structures Project (United States)
National Aeronautics and Space Administration — This SBIR Phase II effort will continue to develop and demonstrate an innovative shape memory alloy (SMA) periodic cellular structural technology. Periodic cellular...
11. Scalable asynchronous execution of cellular automata (United States)
Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo
The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute only after the execution of the previous step at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can execute different time steps. This can be a notable advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications, simulation of natural phenomena, etc., in which the execution times can be different and variable, due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful to represent the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both the cases of mono-dimensional and two-dimensional partitioning. The results show that the advantage obtained through the asynchronous execution, when compared to the all-to-all synchronous approach is notable, and it can be as large as 90% in terms of speedup.
12. Early Life Nutrition, Epigenetics and Programming of Later Life Disease
Directory of Open Access Journals (Sweden)
Mark H. Vickers
Full Text Available The global pandemic of obesity and type 2 diabetes is often causally linked to marked changes in diet and lifestyle; namely marked increases in dietary intakes of high energy diets and concomitant reductions in physical activity levels. However, less attention has been paid to the role of developmental plasticity and alterations in phenotypic outcomes resulting from altered environmental conditions during the early life period. Human and experimental animal studies have highlighted the link between alterations in the early life environment and increased risk of obesity and metabolic disorders in later life. This link is conceptualised as the developmental programming hypothesis whereby environmental influences during critical periods of developmental plasticity can elicit lifelong effects on the health and well-being of the offspring. In particular, the nutritional environment in which the fetus or infant develops influences the risk of metabolic disorders in offspring. The late onset of such diseases in response to earlier transient experiences has led to the suggestion that developmental programming may have an epigenetic component, as epigenetic marks such as DNA methylation or histone tail modifications could provide a persistent memory of earlier nutritional states. Moreover, evidence exists, at least from animal models, that such epigenetic programming should be viewed as a transgenerational phenomenon. However, the mechanisms by which early environmental insults can have long-term effects on offspring are relatively unclear. Thus far, these mechanisms include permanent structural changes to the organ caused by suboptimal levels of an important factor during a critical developmental period, changes in gene expression caused by epigenetic modifications (including DNA methylation, histone modification, and microRNA and permanent changes in cellular ageing. A better understanding of the epigenetic basis of developmental programming and how
13. Cellular regulation of the dopamine transporter
DEFF Research Database (Denmark)
Eriksen, Jacob
The dopamine transporter (DAT) mediates reuptake of dopamine from the synaptic cleft and is a target for widely abused psychostimulants such as cocaine and amphetamine. Nonetheless, little is known about the cellular distribution and trafficking of natively expressed DAT. DAT and its trafficking...... in heterologous cells and in cultured DA neurons. DAT has been shown to be regulated by the dopamine D2 receptor (D2R), the primary target foranti-psychotics, through a direct interaction. D2R is among other places expressed as an autoreceptor in DA neurons. Transient over-expression of DAT with D2R in HEK293...
14. Cellular automata models for synchronized traffic flow
CERN Document Server
Jiang Rui
This paper presents a new cellular automata model for describing synchronized traffic flow. The fundamental diagrams, the spacetime plots and the 1 min average data have been analysed in detail. It is shown that the model can describe the outflow from the jams, the light synchronized flow as well as heavy synchronized flow with average speed greater than approximately 24 km h sup - sup 1. As for the synchronized flow with speed lower than 24 km h sup - sup 1 , it is unstable and will evolve into the coexistence of jams, free flow and light synchronized flow. This is consistent with the empirical findings (Kerner B S 1998 Phys. Rev. Lett. 81 3797).
15. Enantioselective cellular uptake of chiral semiconductor nanocrystals (United States)
Martynenko, I. V.; Kuznetsova, V. A.; Litvinov, I. K.; Orlova, A. O.; Maslov, V. G.; Fedorov, A. V.; Dubavik, A.; Purcell-Milton, F.; Gun'ko, Yu K.; Baranov, A. V.
The influence of the chirality of semiconductor nanocrystals, CdSe/ZnS quantum dots (QDs) capped with L- and D-cysteine, on the efficiency of their uptake by living Ehrlich Ascite carcinoma cells is studied by spectral- and time-resolved fluorescence microspectroscopy. We report an evident enantioselective process where cellular uptake of the L-Cys QDs is almost twice as effective as that of the D-Cys QDs. This finding paves the way for the creation of novel approaches to control the biological properties and behavior of nanomaterials in living cells.
16. Cellular trafficking of nicotinic acetylcholine receptors
Institute of Scientific and Technical Information of China (English)
Paul A ST JOHN
Nicotinic acetylcholine receptors (nAChRs) play critical roles throughout the body. Precise regulation of the cellular location and availability of nAChRs on neurons and target cells is critical to their proper function. Dynamic, post-translational regulation of nAChRs, particularly control of their movements among the different compartments of cells, is an important aspect of that regulation. A combination of new information and new techniques has the study of nAChR trafficking poised for new breakthroughs.
17. Cellular and physical mechanisms of branching morphogenesis (United States)
Varner, Victor D.; Nelson, Celeste M.
Branching morphogenesis is the developmental program that builds the ramified epithelial trees of various organs, including the airways of the lung, the collecting ducts of the kidney, and the ducts of the mammary and salivary glands. Even though the final geometries of epithelial trees are distinct, the molecular signaling pathways that control branching morphogenesis appear to be conserved across organs and species. However, despite this molecular homology, recent advances in cell lineage analysis and real-time imaging have uncovered surprising differences in the mechanisms that build these diverse tissues. Here, we review these studies and discuss the cellular and physical mechanisms that can contribute to branching morphogenesis. PMID:25005470
18. Cellular automata modelling of hantarvirus infection
Energy Technology Data Exchange (ETDEWEB)
Abdul Karim, Mohamad Faisal [School of Distance Education, Universiti Sains Malaysia, Minden 11800, Penang (Malaysia)], E-mail:; Md Ismail, Ahmad Izani [School of Mathematical Sciences, Universiti Sains Malaysia, Minden 11800, Penang (Malaysia)], E-mail:; Ching, Hoe Bee [School of Mathematical Sciences, Universiti Sains Malaysia, Minden 11800, Penang (Malaysia)], E-mail:
Hantaviruses are a group of viruses which have been identified as being responsible for the outbreak of diseases such as the hantavirus pulmonary syndrome. In an effort to understand the characteristics and dynamics of hantavirus infection, mathematical models based on differential equations have been developed and widely studied. However, such models neglect the local characteristics of the spreading process and do not include variable susceptibility of individuals. In this paper, we develop an alternative approach based on cellular automata to analyze and study the spatiotemporal patterns of hantavirus infection.
19. Microwave components for cellular portable radiotelephone (United States)
Muraguchi, Masahiro; Aikawa, Masayoshi
Mobile and personal communication systems are expected to represent a huge market for microwave components in the coming years. A number of components in silicon bipolar, silicon Bi-CMOS, GaAs MESFET, HBT and HEMT are now becoming available for system application. There are tradeoffs among the competing technologies with regard to performance, cost, reliability and time-to-market. This paper describes process selection and requirements of cost and r.f. performances to microwave semiconductor components for digital cellular and cordless telephones. Furthermore, new circuit techniques which were developed by NTT are presented.
20. Cellular automata modeling of pedestrian's crossing dynamics
Institute of Scientific and Technical Information of China (English)
张晋; 王慧; 李平
Cellular automata modeling techniques and the characteristics of mixed traffic flow were used to derive the 2-dimensional model presented here for simulation of pedestrian's crossing dynamics.A conception of "stop point" is introduced to deal with traffic obstacles and resolve conflicts among pedestrians or between pedestrians and the other vehicles on the crosswalk.The model can be easily extended,is very efficient for simulation of pedestrian's crossing dynamics,can be integrated into traffic simulation software,and has been proved feasible by simulation experiments.
1. Cellular automaton model of crowd evacuation inspired by slime mould (United States)
Kalogeiton, V. S.; Papadopoulos, D. P.; Georgilas, I. P.; Sirakoulis, G. Ch.; Adamatzky, A. I.
In all the living organisms, the self-preservation behaviour is almost universal. Even the most simple of living organisms, like slime mould, is typically under intense selective pressure to evolve a response to ensure their evolution and safety in the best possible way. On the other hand, evacuation of a place can be easily characterized as one of the most stressful situations for the individuals taking part on it. Taking inspiration from the slime mould behaviour, we are introducing a computational bio-inspired model crowd evacuation model. Cellular Automata (CA) were selected as a fully parallel advanced computation tool able to mimic the Physarum's behaviour. In particular, the proposed CA model takes into account while mimicking the Physarum foraging process, the food diffusion, the organism's growth, the creation of tubes for each organism, the selection of optimum tube for each human in correspondence to the crowd evacuation under study and finally, the movement of all humans at each time step towards near exit. To test the model's efficiency and robustness, several simulation scenarios were proposed both in virtual and real-life indoor environments (namely, the first floor of office building B of the Department of Electrical and Computer Engineering of Democritus University of Thrace). The proposed model is further evaluated in a purely quantitative way by comparing the simulation results with the corresponding ones from the bibliography taken by real data. The examined fundamental diagrams of velocity-density and flow-density are found in full agreement with many of the already published corresponding results proving the adequacy, the fitness and the resulting dynamics of the model. Finally, several real Physarum experiments were conducted in an archetype of the aforementioned real-life environment proving at last that the proposed model succeeded in reproducing sufficiently the Physarum's recorded behaviour derived from observation of the aforementioned
2. Multiple origins of life (United States)
Raup, D. M.; Valentine, J. W.
There is some indication that life may have originated readily under primitive earth conditions. If there were multiple origins of life, the result could have been a polyphyletic biota today. Using simple stochastic models for diversification and extinction, we conclude: (1) the probability of survival of life is low unless there are multiple origins, and (2) given survival of life and given as many as 10 independent origins of life, the odds are that all but one would have gone extinct, yielding the monophyletic biota we have now. The fact of the survival of our particular form of life does not imply that it was unique or superior.
3. A Computation in a Cellular Automaton Collider Rule 110
CERN Document Server
Martinez, Genaro J; McIntosh, Harold V
A cellular automaton collider is a finite state machine build of rings of one-dimensional cellular automata. We show how a computation can be performed on the collider by exploiting interactions between gliders (particles, localisations). The constructions proposed are based on universality of elementary cellular automaton rule 110, cyclic tag systems, supercolliders, and computing on rings.
4. Diabetes mellitus: channeling care through cellular discovery. (United States)
Maiese, Kenneth; Shang, Yan Chen; Chong, Zhao Zhong; Hou, Jinling
Diabetes mellitus (DM) impacts a significant portion of the world's population and care for this disorder places an economic burden on the gross domestic product for any particular country. Furthermore, both Type 1 and Type 2 DM are becoming increasingly prevalent and there is increased incidence of impaired glucose tolerance in the young. The complications of DM are protean and can involve multiple systems throughout the body that are susceptible to the detrimental effects of oxidative stress and apoptotic cell injury. For these reasons, innovative strategies are necessary for the implementation of new treatments for DM that are generated through the further understanding of cellular pathways that govern the pathological consequences of DM. In particular, both the precursor for the coenzyme beta-nicotinamide adenine dinucleotide (NAD(+)), nicotinamide, and the growth factor erythropoietin offer novel platforms for drug discovery that involve cellular metabolic homeostasis and inflammatory cell control. Interestingly, these agents and their tightly associated pathways that consist of cell cycle regulation, protein kinase B, forkhead transcription factors, and Wnt signaling also function in a broader sense as biomarkers for disease onset and progression.
5. Cellular traditional Chinese medicine on photobiomodulation (United States)
Liu, Timon Cheng-Yi; Cheng, Lei; Liu, Jiang; Wang, Shuang-Xi; Xu, Xiao-Yang; Deng, Xiao-Yuan; Liu, Song-Hao
Although yin-yang is one of the basic models of traditional Chinese medicine (TCM) for TCM objects such as whole body, five zangs or six fus, they are widely used to discuss cellular processes in papers of famous journals such as Cell, Nature, or Science. In this paper, the concept of the degree of difficulty (DD) of a process was introduced to redefine yin and yang and extend the TCM yin-yang model to the DD yin-yang model so that we have the DD yin-yang inter-transformation, the DD yin-yang antagonism, the DD yin-yang interdependence and the DD yin ping yang mi, which and photobiomodulation (PBM) on cells are supported by each other. It was shown that healthy cells are in the DD yin ping yang mi so that there is no PBM, and there is PBM on non-healthy cells until the cells become healthy so that PBM can be called a cellular rehabilitation. The DD yin-yang inter-transformation holds for our biological information model of PBM. The DD yin-yang antagonism and the DD yin-yang interdependence also hold for a series of experimental studies such as the stimulation of DNA synthesis in HeLa cells after simultaneous irradiation with narrow-band red light and a wide-band cold light, or consecutive irradiation with blue and red light.
6. Alpha-synuclein is a cellular ferrireductase.
Directory of Open Access Journals (Sweden)
Paul Davies
Full Text Available α-synuclein (αS is a cellular protein mostly known for the association of its aggregated forms with a variety of diseases that include Parkinson's disease and Dementia with Lewy Bodies. While the role of αS in disease is well documented there is currently no agreement on the physiological function of the normal isoform of the protein. Here we provide strong evidence that αS is a cellular ferrireductase, responsible for reducing iron (III to bio available iron (II. The recombinant form of the protein has a V(Max of 2.72 nmols/min/mg and K(m 23 µM. This activity is also evident in lysates from neuronal cell lines overexpressing αS. This activity is dependent on copper bound to αS as a cofactor and NADH as an electron donor. Overexpression of α-synuclein by cells significantly increases the percentage of iron (II in cells. The common disease mutations associated with increased susceptibility to PD show no [corrected] differences in activity or iron (II levels. This discovery may well provide new therapeutic targets for PD and Lewy body dementias.
7. Cellular and molecular approaches to memory storage. (United States)
Laroche, S
There has been nearly a century of interest in the idea that information is stored in the brain as changes in the efficacy of synaptic connections on neurons that are activated during learning. The discovery and detailed report of the phenomenon generally known as long-term potentiation opened a new chapter in the study of synaptic plasticity in the vertebrate brain, and this form of synaptic plasticity has now become the dominant model in the search for the cellular bases of learning and memory. To date, considerable progress has been made in understanding the cellular and molecular mechanisms underlying synaptic plasticity and in identifying the neural systems which express it. In parallel, the hypothesis that the mechanisms underlying synaptic plasticity are activated during learning and serve learning and memory has gained much empirical support. Accumulating evidence suggests that the rapid activation of the genetic machinery is a key mechanism underlying the enduring modification of neural networks required for the laying down of memory. These advances are reviewed below.
8. Filovirus tropism: Cellular molecules for viral entry
Directory of Open Access Journals (Sweden)
Ayato eTakada
Full Text Available In human and nonhuman primates, filoviruses (Ebola and Marburg viruses cause severe hemorrhagic fever.Recently, other animals such as pigs and some species of fruit bats have also been shown to be susceptible to these viruses. While having a preference for some cell types such as hepatocytes, endothelial cells, dendritic cells, monocytes, and macrophages, filoviruses are known to be pantropic in infection of primates. The envelope glycoprotein (GP is responsible for both receptor binding and fusion of the virus envelope with the host cell membrane. It has been demonstrated that filovirus GP interacts with multiple molecules for entry into host cells, whereas none of the cellular molecules so far identified as a receptor/coreceptor fully explains filovirus tissue tropism and host range. Available data suggest that the mucin-like region (MLR on GP plays an important role in attachment to the preferred target cells, whose infection is likely involved in filovirus pathogenesis, whereas the MLR is not essential for the fundamental function of the GP in viral entry into cells in vitro. Further studies elucidating the mechanisms of cellular entry of filoviruses may shed light on the development of strategies for prophylaxis and treatment of Ebola and Marburg hemorrhagic fevers.
9. Elements of the Cellular Metabolic Structure
Directory of Open Access Journals (Sweden)
Ildefonso Martínez De La Fuente
Full Text Available A large number of studies have shown the existence of metabolic covalent modifications in different molecular structures, able to store biochemical information that is not encoded by the DNA. Some of these covalent mark patterns can be transmitted across generations (epigenetic changes. Recently, the emergence of Hopfield-like attractor dynamics has been observed in the self-organized enzymatic networks, which have the capacity to store functional catalytic patterns that can be correctly recovered by the specific input stimuli. The Hopfield-like metabolic dynamics are stable and can be maintained as a long-term biochemical memory. In addition, specific molecular information can be transferred from the functional dynamics of the metabolic networks to the enzymatic activity involved in the covalent post-translational modulation so that determined functional memory can be embedded in multiple stable molecular marks. Both the metabolic dynamics governed by Hopfield-type attractors (functional processes and the enzymatic covalent modifications of determined molecules (structural dynamic processes seem to represent the two stages of the dynamical memory of cellular metabolism (metabolic memory. Epigenetic processes appear to be the structural manifestation of this cellular metabolic memory. Here, a new framework for molecular information storage in the cell is presented, which is characterized by two functionally and molecularly interrelated systems: a dynamic, flexible and adaptive system (metabolic memory and an essentially conservative system (genetic memory. The molecular information of both systems seems to coordinate the physiological development of the whole cell.
10. Engineering Cellular Photocomposite Materials Using Convective Assembly
Directory of Open Access Journals (Sweden)
Orlin D. Velev
Full Text Available Fabricating industrial-scale photoreactive composite materials containing living cells, requires a deposition strategy that unifies colloid science and cell biology. Convective assembly can rapidly deposit suspended particles, including whole cells and waterborne latex polymer particles into thin (<10 µm thick, organized films with engineered adhesion, composition, thickness, and particle packing. These highly ordered composites can stabilize the diverse functions of photosynthetic cells for use as biophotoabsorbers, as artificial leaves for hydrogen or oxygen evolution, carbon dioxide assimilation, and add self-cleaning capabilities for releasing or digesting surface contaminants. This paper reviews the non-biological convective assembly literature, with an emphasis on how the method can be modified to deposit living cells starting from a batch process to its current state as a continuous process capable of fabricating larger multi-layer biocomposite coatings from diverse particle suspensions. Further development of this method will help solve the challenges of engineering multi-layered cellular photocomposite materials with high reactivity, stability, and robustness by clarifying how process, substrate, and particle parameters affect coating microstructure. We also describe how these methods can be used to selectively immobilize photosynthetic cells to create biomimetic leaves and compare these biocomposite coatings to other cellular encapsulation systems.
11. Travel Mode Detection Exploiting Cellular Network Data
Directory of Open Access Journals (Sweden)
Kalatian Arash
Full Text Available There has been growing interest in exploiting cellular network data for transportation planning purposes in recent years. In this paper, we utilize these data for determining mode of travel in the city of Shiraz, Iran. Cellular data records -including location updates in 5minute time intervals- of 300,000 users from the city of Shiraz has been collected for 40 hours in three consecutive days in a cooperation with the major telecommunications service provider of the country. Depending on the density of mobile BTS’s in different zones of the city, the user location can be located within an average of 200 meters. Considering data filtering and smoothing, data preparation and converting them to comprehensible traces is a large portion of the work. A novel approach to identify stay locations is proposed and implemented in this paper. Origin-Destination matrices are then created based on trips detected, which shows acceptable consistency with current O-D matrices. Finally, Travel times for all trips of a user is estimated as the main attribute for clustering. Trips between same origin and destination zones are combined together in a group. Using K-means algorithm, records within each group are the portioned in two or three clusters, based on their travel speeds. Each cluster represents a certain mode of travel; walking, public transportation or driving a private car.
Directory of Open Access Journals (Sweden)
Yousef Mehdipour
Full Text Available Measuring customer satisfaction provides an indication of how successful the organization is at providing products and/or services to the marketplace. Customer satisfaction is a collective outcome of perception, evaluation, and psychological reactions to the consumption experience with a product or service. This researcharticle investigated the attitude of Idea cellular customers to Idea services. All the customers of Idea cellular in Hyderabad city (Andhra Pradesh constituted the population. The sample of the study is 2000 customers that randomly selected. A questionnaire was developed and validated through pilot testing and administered to thesample for the collection of data. The researcher personally visited respondents, thus 100% data were collected.The collected data were tabulated and analyzed by SPSS. Results showed that majority of the respondents of Idea prefer post-paid service than to pre paid and largest segment of respondents are of idea then comes Cell one, Airtel and Vodafone. this study showed that most of the respondents need improvement in service. Majority of respondents gave an excellent rate for “Idea Cellular” services.
13. Tension and robustness in multitasking cellular networks.
Directory of Open Access Journals (Sweden)
Jeffrey V Wong
Full Text Available Cellular networks multitask by exhibiting distinct, context-dependent dynamics. However, network states (parameters that generate a particular dynamic are often sub-optimal for others, defining a source of "tension" between them. Though multitasking is pervasive, it is not clear where tension arises, what consequences it has, and how it is resolved. We developed a generic computational framework to examine the source and consequences of tension between pairs of dynamics exhibited by the well-studied RB-E2F switch regulating cell cycle entry. We found that tension arose from task-dependent shifts in parameters associated with network modules. Although parameter sets common to distinct dynamics did exist, tension reduced both their accessibility and resilience to perturbation, indicating a trade-off between "one-size-fits-all" solutions and robustness. With high tension, robustness can be preserved by dynamic shifting of modules, enabling the network to toggle between tasks, and by increasing network complexity, in this case by gene duplication. We propose that tension is a general constraint on the architecture and operation of multitasking biological networks. To this end, our work provides a framework to quantify the extent of tension between any network dynamics and how it affects network robustness. Such analysis would suggest new ways to interfere with network elements to elucidate the design principles of cellular networks.
14. Rhabdomyosarcoma: Advances in Molecular and Cellular Biology
Directory of Open Access Journals (Sweden)
Xin Sun
Full Text Available Rhabdomyosarcoma (RMS is the most common soft tissue malignancy in childhood and adolescence. The two major histological subtypes of RMS are alveolar RMS, driven by the fusion protein PAX3-FKHR or PAX7-FKHR, and embryonic RMS, which is usually genetically heterogeneous. The prognosis of RMS has improved in the past several decades due to multidisciplinary care. However, in recent years, the treatment of patients with metastatic or refractory RMS has reached a plateau. Thus, to improve the survival rate of RMS patients and their overall well-being, further understanding of the molecular and cellular biology of RMS and identification of novel therapeutic targets are imperative. In this review, we describe the most recent discoveries in the molecular and cellular biology of RMS, including alterations in oncogenic pathways, miRNA (miR, in vivo models, stem cells, and important signal transduction cascades implicated in the development and progression of RMS. Furthermore, we discuss novel potential targeted therapies that may improve the current treatment of RMS.
15. Cellular receptors for plasminogen activators recent advances. (United States)
Ellis, V
The generation of the broad-specificity protease plasmin by the plasminogen activators urokinase-type plasminogen activator (uPA) and tissue-type plasminogen activator (tPA) is implicated in a variety of pathophysiological processes, including vascular fibrin dissolution, extracellular matrix degradation and remodeling, and cell migration. A mechanism for the regulation of plasmin generation is through binding of the plasminogen activators to specific cellular receptors: uPA to the glycolipid-anchored membrane protein urokinase-type plasminogen activator receptor (uPAR) and tPA to a number of putative binding sites. The uPA-uPAR complex can interact with a variety of ligands, including plasminogen, vitronectin, and integrins, indicating a multifunctional role for uPAR, regulating not only efficient and spatially restricted plasmin generation but also having the potential to modulate cell adhesion and signal transduction. The cellular binding of tPA, although less well characterized, also has the capacity to regulate plasmin generation and to play a significant role in vessel-wall biology. (Trends Cardiovasc Med 1997;7:227-234). © 1997, Elsevier Science Inc.
16. Dynamics of active cellular response under stress (United States)
de, Rumi; Zemel, Assaf; Safran, Samuel
Forces exerted by and on adherent cells are important for many physiological processes such as wound healing and tissue formation. In addition, recent experiments have shown that stem cell differentiation is controlled, at least in part, by the elasticity of the surrounding matrix. Using a simple theoretical model that includes the forces due to both the mechanosensitive nature of cells and the elastic response of the matrix, we predict the dynamics of orientation of cells. The model predicts many features observed in measurements of cellular forces and orientation including the increase with time of the forces generated by cells in the absence of applied stress and the consequent decrease of the force in the presence of quasi-static stresses. We also explain the puzzling observation of parallel alignment of cells for static and quasi-static stresses and of nearly perpendicular alignment for dynamically varying stresses. In addition, we predict the response of the cellular orientation to a sinusoidally varying applied stress as a function of frequency. The dependence of the cell orientation angle on the Poisson ratio of the surrounding material can be used to distinguish systems in which cell activity is controlled by stress from those where cell activity is controlled by strain. Reference: Nature Physics, vol. 3, pp 655 (2007).
17. Cellular contractility requires ubiquitin mediated proteolysis.
Directory of Open Access Journals (Sweden)
Yuval Cinnamon
Full Text Available BACKGROUND: Cellular contractility, essential for cell movement and proliferation, is regulated by microtubules, RhoA and actomyosin. The RhoA dependent kinase ROCK ensures the phosphorylation of the regulatory Myosin II Light Chain (MLC Ser19, thereby activating actomyosin contractions. Microtubules are upstream inhibitors of contractility and their depolymerization or depletion cause cells to contract by activating RhoA. How microtubule dynamics regulates RhoA remains, a major missing link in understanding contractility. PRINCIPAL FINDINGS: We observed that contractility is inhibited by microtubules not only, as previously reported, in adherent cells, but also in non-adhering interphase and mitotic cells. Strikingly we observed that contractility requires ubiquitin mediated proteolysis by a Cullin-RING ubiquitin ligase. Inhibition of proteolysis, ubiquitination and neddylation all led to complete cessation of contractility and considerably reduced MLC Ser19 phosphorylation. CONCLUSIONS: Our results imply that cells express a contractility inhibitor that is degraded by ubiquitin mediated proteolysis, either constitutively or in response to microtubule depolymerization. This degradation seems to depend on a Cullin-RING ubiquitin ligase and is required for cellular contractions.
Directory of Open Access Journals (Sweden)
Tomasz Garbacz
Full Text Available In a study of cellular injection, molding process uses polyvinylchloride PVC. Polymers modified with introducing blowing agents into them in the Laboratory of the Department of Technologies and Materiase of Technical University of Kosice. For technological reasons, blowing agents have a form of granules. In the experiment, the content of the blowing agent (0–2,0 % by mass fed into the processed polymer was adopted as a variable factor. In the studies presented in the article, the chemical blowing agents occurring in the granulated form with a diameter of 1.2 to 1.4 mm were used. The view of the technological line for cellular injection molding and injection mold cavity with injection moldings are shown in Figure 1. The results of the determination of selected properties of injection molded parts for various polymeric materials, obtained with different content of blowing agents, are shown in Figures 4-7. Microscopic examination of cross-sectional structure of the moldings were obtained using the author's position image analysis of porous structure. Based on analysis of photographs taken (Figures 7, 8, 9 it was found that the coating containing 1.0% of blowing agents is a clearly visible solid outer layer and uniform distribution of pores and their sizes are similar.
19. Mechanisms of cellular invasion by intracellular parasites. (United States)
Walker, Dawn M; Oghumu, Steve; Gupta, Gaurav; McGwire, Bradford S; Drew, Mark E; Satoskar, Abhay R
Numerous disease-causing parasites must invade host cells in order to prosper. Collectively, such pathogens are responsible for a staggering amount of human sickness and death throughout the world. Leishmaniasis, Chagas disease, toxoplasmosis, and malaria are neglected diseases and therefore are linked to socio-economical and geographical factors, affecting well-over half the world's population. Such obligate intracellular parasites have co-evolved with humans to establish a complexity of specific molecular parasite-host cell interactions, forming the basis of the parasite's cellular tropism. They make use of such interactions to invade host cells as a means to migrate through various tissues, to evade the host immune system, and to undergo intracellular replication. These cellular migration and invasion events are absolutely essential for the completion of the lifecycles of these parasites and lead to their for disease pathogenesis. This review is an overview of the molecular mechanisms of protozoan parasite invasion of host cells and discussion of therapeutic strategies, which could be developed by targeting these invasion pathways. Specifically, we focus on four species of protozoan parasites Leishmania, Trypanosoma cruzi, Plasmodium, and Toxoplasma, which are responsible for significant morbidity and mortality.
20. Characteristics of cellular composition of periodontal pockets (United States)
Hasiuk, Petro; Hasiuk, Nataliya; Kindiy, Dmytro; Ivanchyshyn, Victoriya; Kalashnikov, Dmytro; Zubchenko, Sergiy
Purpose The development of inflammatory periodontal disease in young people is an urgent problem of today's periodontology, and requires a development of new methods that would give an opportunity not only to diagnose but also for prognosis of periodontitis course in a given patients contingent. Results Cellular structure of periodontal pockets is presented by hematogenous and epithelial cells. Our results are confirmed by previous studies, and show that the penetration of periodontal pathogens leads to formation in periodontal tissue of a highly active complex compounds—cytokines that are able to modify the activity of neutrophils and reduce their specific antibacterial properties. Cytokines not only adversely affect the periodontal tissues, but also cause further activation of cells that synthesized them, and inhibit tissue repair and process of resynthesis of connective tissue by fibroblasts. Conclusion Neutrophilic granulocytes present in each of the types of smear types, but their functional status and quantitative composition is different. The results of our cytological study confirmed the results of immunohistochemical studies, and show that in generalized periodontitis, an inflammatory cellular elements with disorganized epithelial cells and connective tissue of the gums and periodontium, and bacteria form specific types of infiltration in periodontal tissues. PMID:28180007
1. The GARP complex is required for cellular sphingolipid homeostasis (United States)
Fröhlich, Florian; Petit, Constance; Kory, Nora; Christiano, Romain; Hannibal-Bach, Hans-Kristian; Graham, Morven; Liu, Xinran; Ejsing, Christer S; Farese, Robert V; Walther, Tobias C
Sphingolipids are abundant membrane components and important signaling molecules in eukaryotic cells. Their levels and localization are tightly regulated. However, the mechanisms underlying this regulation remain largely unknown. In this study, we identify the Golgi-associated retrograde protein (GARP) complex, which functions in endosome-to-Golgi retrograde vesicular transport, as a critical player in sphingolipid homeostasis. GARP deficiency leads to accumulation of sphingolipid synthesis intermediates, changes in sterol distribution, and lysosomal dysfunction. A GARP complex mutation analogous to a VPS53 allele causing progressive cerebello-cerebral atrophy type 2 (PCCA2) in humans exhibits similar, albeit weaker, phenotypes in yeast, providing mechanistic insights into disease pathogenesis. Inhibition of the first step of de novo sphingolipid synthesis is sufficient to mitigate many of the phenotypes of GARP-deficient yeast or mammalian cells. Together, these data show that GARP is essential for cellular sphingolipid homeostasis and suggest a therapeutic strategy for the treatment of PCCA2. DOI: PMID:26357016
2. Cellular angiofibroma in women: a review of the literature. (United States)
Mandato, Vincenzo Dario; Santagni, Susanna; Cavazza, Alberto; Aguzzoli, Lorenzo; Abrate, Martino; La Sala, Giovanni Battista
Cellular Angiofibroma (CA) represents a quite recently described mesenchymal tumour that occurs in both genders, in particular in the vulvo-vaginal region in women and in the inguino-scrotal area in men. The first description of this tumour dates from Nucci et al. article in 1997; since then, the literature reports different reviews and case report of this tumour in both genders, but no article specifically addressing CA treatment and follow-up in women. In this review we collected all 79 published female CA cases, analyzing the clinical, pathological and immunohistochemical features of the tumour.CA affects women mostly during the fifth decade of life, it is generally a small and asymptomatic mass that mainly arises in the vulvo-vaginal region, although there are reported pelvic and extra-pelvic cases. The treatment requires a simple local excision due to an extremely low ability to recurrent locally and no chance to metastasize. Throughout the immunohistochemical and pathological findings it is also easily possible a differential diagnosis from the other soft tissue tumours which affect the vulvo-vaginal area, such as spindle cell lipoma, solitary fibrous tumour, angiomyofibroblastoma and aggressive angiomyxoma.
3. Biofabrication and testing of a fully cellular nerve graft. (United States)
Owens, Christopher M; Marga, Francoise; Forgacs, Gabor; Heesch, Cheryl M
Rupture of a nerve is a debilitating injury with devastating consequences for the individual's quality of life. The gold standard of repair is the use of an autologous graft to bridge the severed nerve ends. Such repair however involves risks due to secondary surgery at the donor site and may result in morbidity and infection. Thus the clinical approach to repair often involves non-cellular solutions, grafts composed of synthetic or natural materials. Here we report on a novel approach to biofabricate fully biological grafts composed exclusively of cells and cell secreted material. To reproducibly and reliably build such grafts of composite geometry we use bioprinting. We test our grafts in a rat sciatic nerve injury model for both motor and sensory function. In particular we compare the regenerative capacity of the biofabricated grafts with that of autologous grafts and grafts made of hollow collagen tubes by measuring the compound action potential (for motor function) and the change in mean arterial blood pressure as consequence of electrically eliciting the somatic pressor reflex. Our results provide evidence that bioprinting is a promising approach to nerve graft fabrication and as a consequence to nerve regeneration.
4. Super-Resolution Microscopy: Shedding Light on the Cellular Plasma Membrane. (United States)
Stone, Matthew B; Shelby, Sarah A; Veatch, Sarah L
Lipids and the membranes they form are fundamental building blocks of cellular life, and their geometry and chemical properties distinguish membranes from other cellular environments. Collective processes occurring within membranes strongly impact cellular behavior and biochemistry, and understanding these processes presents unique challenges due to the often complex and myriad interactions between membrane components. Super-resolution microscopy offers a significant gain in resolution over traditional optical microscopy, enabling the localization of individual molecules even in densely labeled samples and in cellular and tissue environments. These microscopy techniques have been used to examine the organization and dynamics of plasma membrane components, providing insight into the fundamental interactions that determine membrane functions. Here, we broadly introduce the structure and organization of the mammalian plasma membrane and review recent applications of super-resolution microscopy to the study of membranes. We then highlight some inherent challenges faced when using super-resolution microscopy to study membranes, and we discuss recent technical advancements that promise further improvements to super-resolution microscopy and its application to the plasma membrane.
5. A single-cell bioluminescence imaging system for monitoring cellular gene expression in a plant body. (United States)
Muranaka, Tomoaki; Kubota, Saya; Oyama, Tokitaka
Gene expression is a fundamental cellular process and expression dynamics are of great interest in life science. We succeeded in monitoring cellular gene expression in a duckweed plant, Lemna gibba, using bioluminescent reporters. Using particle bombardment, epidermal and mesophyll cells were transfected with the luciferase gene (luc+) under the control of a constitutive [Cauliflower mosaic virus 35S (CaMV35S)] and a rhythmic [Arabidopsis thaliana CIRCADIAN CLOCK ASSOCIATED 1 (AtCCA1)] promoter. Bioluminescence images were captured using an EM-CCD (electron multiply charged couple device) camera. Luminescent spots of the transfected cells in the plant body were quantitatively measured at the single-cell level. Luminescence intensities varied over a 1,000-fold range among CaMV35S::luc+-transfected cells in the same plant body and showed a log-normal-like frequency distribution. We monitored cellular gene expression under light-dark conditions by capturing bioluminescence images every hour. Luminescence traces of ≥50 individual cells in a frond were successfully obtained in each monitoring procedure. Rhythmic and constitutive luminescence behaviors were observed in cells transfected with AtCCA1::luc+ and CaMV35S::luc+, respectively. Diurnal rhythms were observed in every AtCCA1::luc+-introduced cell with traceable luminescence, and slight differences were detected in their rhythmic waveforms. Thus the single-cell bioluminescence monitoring system was useful for the characterization of cellular gene expression in a plant body.
6. Life cycle assessment (LCA)
DEFF Research Database (Denmark)
Thrane, Mikkel; Schmidt, Jannick Andresen
The chapter introduces Life Cycle Assessment (LCA) and its application according to the ISO 1404043 standards.......The chapter introduces Life Cycle Assessment (LCA) and its application according to the ISO 1404043 standards....
7. Go4Life
Medline Plus
Full Text Available ... This item has been hidden Go4Life Exercises — Building Strength Play all Try some of these Go4Life exercises to improve your strength! Stronger muscles can make it easier to do ...
CERN Document Server
Guinot, Genevieve
Balancing work and home life, getting support for your family and thriving in an inclusive and respectful workplace: find out more about the support structures in place to enhance your working life@CERN!
9. Benchmark study between FIDAP and a cellular automata code (United States)
Akau, R. L.; Stockman, H. W.
A fluid flow benchmark exercise was conducted to compare results between a cellular automata code and FIDAP. Cellular automata codes are free from gridding constraints, and are generally used to model slow (Reynolds number approximately 1) flows around complex solid obstacles. However, the accuracy of cellular automata codes at higher Reynolds numbers, where inertial terms are significant, is not well-documented. In order to validate the cellular automata code, two fluids problems were investigated. For both problems, flow was assumed to be laminar, two-dimensional, isothermal, incompressible and periodic. Results showed that the cellular automata code simulated the overall behavior of the flow field.
10. Life Among the Stars (United States)
MOSAIC, 1977
Explores possibility of extra-terrestrial life, reviewing current hypotheses regarding where in space life would most likely occur. Discusses astrometry and spectroscopy as methods for determining stellar motions. Describes United States and Soviet projects for receiving stellar communications. Relates origin of life on earth to observed high…
11. HIV Life Cycle (United States)
HIV Overview The HIV Life Cycle (Last updated 9/13/2016; last reviewed 9/8/2016) Key Points HIV gradually destroys the immune ... life cycle. What is the connection between the HIV life cycle and HIV medicines? Antiretroviral therapy (ART) ...
12. Go4Life (United States)
... more This item has been hidden Go4Life Exercises — Stretching Play all Try some of these Go4Life exercises ... how to do lower body exercises in her office with Go4Life's Toe Stand. CC 1:28 Play ...
13. Go4Life
Medline Plus
14. End of Life Issues (United States)
Planning for the end of life can be difficult. But by deciding what end-of-life care best suits your needs when you are healthy, you can ... right choices when the time comes. End-of-life planning usually includes making choices about the following: ...
15. Deciding about treatments that prolong life (United States)
16. Oxidative stress action in cellular aging
Directory of Open Access Journals (Sweden)
Monique Cristine de Oliveira
Full Text Available Various theories try to explain the biological aging by changing the functions and structure of organic systems and cells. During lifetime, free radicals in the oxidative stress lead to lipid peroxidation of cellular membranes, homeostasis imbalance, chemical residues formation, gene mutations in DNA, dysfunction of certain organelles, and the arise of diseases due to cell death and/or injury. This review describes the action of oxidative stress in the cells aging process, emphasizing the factors such as cellular oxidative damage, its consequences and the main protective measures taken to prevent or delay this process. Tests with antioxidants: vitamins A, E and C, flavonoids, carotenoids and minerals, the practice of caloric restriction and physical exercise, seeking the beneficial effects on human health, increasing longevity, reducing the level of oxidative stress, slowing the cellular senescence and origin of certain diseases, are discussed.Diferentes teorias tentam explicar o envelhecimento biológico através da alteração das funções e estrutura dos sistemas orgânicos e células. Ao longo da vida, os radicais livres presentes no estresse oxidativo conduzem à peroxidação dos lipídios das membranas celulares, desequilíbrio da homeostase, formação de resíduos químicos, mutações gênicas no DNA, disfunção de certas organelas, bem como ao surgimento de doenças devido à lesão e/ou morte celular. Nesta revisão descreve-se a ação do estresse oxidativo no processo de envelhecimento das células, enfatizando fatores como os danos oxidativos celulares, suas conseqüências e as principais medidas protetoras adotadas para se prevenir ou retardar este processo. Testes com antioxidantes: vitaminas A, E e C, flavonóides, carotenóides e minerais; a prática de restrição calórica e exercícios físicos, que buscam efeitos benéficos sobre a saúde humana, aumentando a longevidade, reduzindo o nível de estresse oxidativo
17. Multiplex assay for live-cell monitoring of cellular fates of amyloid-β precursor protein (APP.
Directory of Open Access Journals (Sweden)
Maria Merezhko
Full Text Available Amyloid-β precursor protein (APP plays a central role in pathogenesis of Alzheimer's disease. APP has a short half-life and undergoes complex proteolytic processing that is highly responsive to various stimuli such as changes in cellular lipid or energy homeostasis. Cellular trafficking of APP is controlled by its large protein interactome, including dozens of cytosolic adaptor proteins, and also by interactions with lipids. Currently, cellular regulation of APP is mostly studied based on appearance of APP-derived proteolytic fragments to conditioned media and cellular extracts. Here, we have developed a novel live-cell assay system based on several indirect measures that reflect altered APP trafficking and processing in cells. Protein-fragment complementation assay technology for detection of APP-BACE1 protein-protein interaction forms the core of the new assay. In a multiplex form, the assay can measure four endpoints: total cellular APP level, total secreted sAPP level in media, APP-BACE1 interaction in cells and in exosomes released by the cells. Functional validation of the assay with pharmacological and genetic tools revealed distinct patterns of cellular fates of APP, with immediate mechanistic implications. This new technology will facilitate functional genomics studies of late-onset Alzheimer's disease, drug discovery efforts targeting APP and characterization of the physiological functions of APP and its proteolytic fragments.
18. Am I Halfway? Life Lived = Expected Life
DEFF Research Database (Denmark)
Canudas-Romo, Vladimir; Zarulli, Virginia
in 2010 this might be as high as 10 years more. The stage of midlife has always been considered an important step in the life of human beings. However, there is no agreement on which is the age or age-range that represents the middle phase. Here we have further added the notion that halfway......“Nel mezzo del cammin di nostra vita, Mi ritrovai per una selva oscura, Ché la diritta via era smarrita. [In the middle of the journey of our life, I came to myself in a dark wood, for the straight way was lost.]” (Dante 1308-1320) We have reached halfway in life when our age equals our remaining...... life expectancy at that age. This relationship in stable population models between life lived and life left has captured the attention of mathematical demographers since Lotka. Our paper aims to contribute to the halfway-age debate by showing its time trends under mortality models and with current data...
Directory of Open Access Journals (Sweden)
Ahmet YILDIRIM
Full Text Available Individuals in terms of the economy in which we live is one of the most important phenomenon of the century. This phenomenon present itself as the only determinant of people's lives by entering almost makes itself felt. The mo st obvious objective needs of the economy by triggering motive is to induce people to consume . Consumer culture pervades all aspects of the situation are people . Therefore, these people have the blessing of culture , beauty and value all in the name of w hatever is consumed. This is way out of the siege of moral and religious values we have is to go back again . Referred by local cultural and religious values, based on today increasingly come to the fore and the Muslim way of life appears to be close to th e plain / lean preferred by many people life has been a way of life. Even the simple life , a way of life in the Western world , a conception of life , a philosophy, a movement as it has become widely accepted. Here in determining the Muslim way of life Pr ophet. Prophet (sa lived the kind of life a very important model, sample, and determining which direction is known. Religious values, which is the carrier of the prophets, sent to the society they have always been examples and models. Because every aspect of human life, his life style and the surrounding area has a feature. We also value his life that he has unknowingly and without learning and skills and to understand it is not possible to live our religion . We also our presentation, we mainly of Islam o utlook on life and predicted life - style, including the Prophet of Islam 's (sa simple life to scrutinize and lifestyle issues related to reveal , in short Islam's how life has embraced and the Prophet. Prophet's will try to find answers to questions reg arding how to live.
20. Agent-Based Modeling of Mitochondria Links Sub-Cellular Dynamics to Cellular Homeostasis and Heterogeneity (United States)
Dalmasso, Giovanni; Marin Zapata, Paula Andrea; Brady, Nathan Ryan; Hamacher-Brady, Anne
Mitochondria are semi-autonomous organelles that supply energy for cellular biochemistry through oxidative phosphorylation. Within a cell, hundreds of mobile mitochondria undergo fusion and fission events to form a dynamic network. These morphological and mobility dynamics are essential for maintaining mitochondrial functional homeostasis, and alterations both impact and reflect cellular stress states. Mitochondrial homeostasis is further dependent on production (biogenesis) and the removal of damaged mitochondria by selective autophagy (mitophagy). While mitochondrial function, dynamics, biogenesis and mitophagy are highly-integrated processes, it is not fully understood how systemic control in the cell is established to maintain homeostasis, or respond to bioenergetic demands. Here we used agent-based modeling (ABM) to integrate molecular and imaging knowledge sets, and simulate population dynamics of mitochondria and their response to environmental energy demand. Using high-dimensional parameter searches we integrated experimentally-measured rates of mitochondrial biogenesis and mitophagy, and using sensitivity analysis we identified parameter influences on population homeostasis. By studying the dynamics of cellular subpopulations with distinct mitochondrial masses, our approach uncovered system properties of mitochondrial populations: (1) mitochondrial fusion and fission activities rapidly establish mitochondrial sub-population homeostasis, and total cellular levels of mitochondria alter fusion and fission activities and subpopulation distributions; (2) restricting the directionality of mitochondrial mobility does not alter morphology subpopulation distributions, but increases network transmission dynamics; and (3) maintaining mitochondrial mass homeostasis and responding to bioenergetic stress requires the integration of mitochondrial dynamics with the cellular bioenergetic state. Finally, (4) our model suggests sources of, and stress conditions amplifying
1. Multipartite cellular automata and the superposition principle (United States)
Elze, Hans-Thomas
Cellular automata (CA) can show well known features of quantum mechanics (QM), such as a linear updating rule that resembles a discretized form of the Schrödinger equation together with its conservation laws. Surprisingly, a whole class of “natural” Hamiltonian CA, which are based entirely on integer-valued variables and couplings and derived from an action principle, can be mapped reversibly to continuum models with the help of sampling theory. This results in “deformed” quantum mechanical models with a finite discreteness scale l, which for l→0 reproduce the familiar continuum limit. Presently, we show, in particular, how such automata can form “multipartite” systems consistently with the tensor product structures of non-relativistic many-body QM, while maintaining the linearity of dynamics. Consequently, the superposition principle is fully operative already on the level of these primordial discrete deterministic automata, including the essential quantum effects of interference and entanglement.
2. Complex cellular responses to reactive oxygen species. (United States)
Temple, Mark D; Perrone, Gabriel G; Dawes, Ian W
Genome-wide analyses of yeast provide insight into cellular responses to reactive oxygen species (ROS). Many deletion mutants are sensitive to at least one ROS, but no one oxidant is representative of 'oxidative stress' despite the widespread use of a single compound such as H(2)O(2). This has major implications for studies of pathological situations. Cells have a range of mechanisms for maintaining resistance that involves either induction or repression of many genes and extensive remodeling of the transcriptome. Cells have constitutive defense systems that are largely unique to each oxidant, but overlapping, inducible repair systems. The pattern of the transcriptional response to a particular ROS depends on its concentration, and 'classical' antioxidant systems that are induced by high concentrations of ROS can be repressed when cells adapt to low concentrations of ROS.
3. Knowledge discovery for geographical cellular automata
Institute of Scientific and Technical Information of China (English)
LI Xia; Anthony Gar-On Yeh
This paper proposes a new method for geographical simulation by applying data mining techniques to cellular automata. CA has strong capabilities in simulating complex systems. The core of CA is how to define transition rules. There are no good methods for defining these transition rules. They are usually defined by using heuristic methods and thus subject to uncertainties. Mathematical equations are used to represent transition rules implicitly and have limitations in capturing complex relationships. This paper demonstrates that the explicit transition rules of CA can be automatically reconstructed through the rule induction procedure of data mining. The proposed method can reduce the influences of individual knowledge and preferences in defining transition rules and generate more reliable simulation results. It can efficiently discover knowledge from a vast volume of spatial data.
4. Exactly solvable cellular automaton traffic jam model. (United States)
Kearney, Michael J
A detailed study is undertaken of the v{max}=1 limit of the cellular automaton traffic model proposed by Nagel and Paczuski [Phys. Rev. E 51, 2909 (1995)]. The model allows one to analyze the behavior of a traffic jam initiated in an otherwise freely flowing stream of traffic. By mapping onto a discrete-time queueing system, itself related to various problems encountered in lattice combinatorics, exact results are presented in relation to the jam lifetime, the maximum jam length, and the jam mass (the space-time cluster size or integrated vehicle waiting time), both in terms of the critical and the off-critical behavior. This sets existing scaling results in their natural context and also provides several other interesting results in addition.
5. Inhibitors of the Cellular Trafficking of Ricin
Directory of Open Access Journals (Sweden)
Daniel Gillet
Full Text Available Throughout the last decade, efforts to identify and develop effective inhibitors of the ricin toxin have focused on targeting its N-glycosidase activity. Alternatively, molecules disrupting intracellular trafficking have been shown to block ricin toxicity. Several research teams have recently developed high-throughput phenotypic screens for small molecules acting on the intracellular targets required for entry of ricin into cells. These screens have identified inhibitory compounds that can protect cells, and sometimes even animals against ricin. We review these newly discovered cellular inhibitors of ricin intoxication, discuss the advantages and drawbacks of chemical-genetics approaches, and address the issues to be resolved so that the therapeutic development of these small-molecule compounds can progress.
6. Simulation of earthquakes with cellular automata
Directory of Open Access Journals (Sweden)
P. G. Akishin
7. Partitioned quantum cellular automata are intrinsically universal
CERN Document Server
Arrighi, Pablo
There have been several non-axiomatic approaches taken to define Quantum Cellular Automata (QCA). Partitioned QCA (PQCA) are the most canonical of these non-axiomatic definitions. In this work we show that any QCA can be put into the form of a PQCA. Our construction reconciles all the non-axiomatic definitions of QCA, showing that they can all simulate one another, and hence that they are all equivalent to the axiomatic definition. This is achieved by defining generalised n-dimensional intrinsic simulation, which brings the computer science based concepts of simulation and universality closer to theoretical physics. The result is not only an important simplification of the QCA model, it also plays a key role in the identification of a minimal n-dimensional intrinsically universal QCA.
8. Particles and Patterns in Cellular Automata
Energy Technology Data Exchange (ETDEWEB)
Jen, E.; Das, R.; Beasley, C.E.
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Our objective has been to develop tools for studying particle interactions in a class of dynamical systems characterized by discreteness, determinism, local interaction, and an inherently parallel form of evolution. These systems can be described by cellular automata (CA) and the behavior we studied has improved our understanding of the nature of patterns generated by CAs, their ability to perform global computations, and their relationship to continuous dynamical systems. We have also developed a rule-table mathematics that enables one to custom-design CA rule tables to generate patterns of specified types, or to perform specified computational tasks.
9. Protein S-palmitoylation in cellular differentiation (United States)
Zhang, Mingzi M.
Reversible protein S-palmitoylation confers spatiotemporal control of protein function by modulating protein stability, trafficking and activity, as well as protein–protein and membrane–protein associations. Enabled by technological advances, global studies revealed S-palmitoylation to be an important and pervasive posttranslational modification in eukaryotes with the potential to coordinate diverse biological processes as cells transition from one state to another. Here, we review the strategies and tools to analyze in vivo protein palmitoylation and interrogate the functions of the enzymes that put on and take off palmitate from proteins. We also highlight palmitoyl proteins and palmitoylation-related enzymes that are associated with cellular differentiation and/or tissue development in yeasts, protozoa, mammals, plants and other model eukaryotes. PMID:28202682
10. Mathematical analysis of complex cellular activity
CERN Document Server
Bertram, Richard; Teka, Wondimu; Vo, Theodore; Wechselberger, Martin; Kirk, Vivien; Sneyd, James
This book contains two review articles on mathematical physiology that deal with closely related topics but were written and can be read independently. The first article reviews the basic theory of calcium oscillations (common to almost all cell types), including spatio-temporal behaviors such as waves. The second article uses, and expands on, much of this basic theory to show how the interaction of cytosolic calcium oscillators with membrane ion channels can result in highly complex patterns of electrical spiking. Through these examples one can see clearly how multiple oscillatory processes interact within a cell, and how mathematical methods can be used to understand such interactions better. The two reviews provide excellent examples of how mathematics and physiology can learn from each other, and work jointly towards a better understanding of complex cellular processes. Review 1: Richard Bertram, Joel Tabak, Wondimu Teka, Theodore Vo, Martin Wechselberger: Geometric Singular Perturbation Analysis of Burst...
11. Determining Lineage Pathways from Cellular Barcoding Experiments
Directory of Open Access Journals (Sweden)
Leïla Perié
Full Text Available Cellular barcoding and other single-cell lineage-tracing strategies form experimental methodologies for analysis of in vivo cell fate that have been instrumental in several significant recent discoveries. Due to the highly nonlinear nature of proliferation and differentiation, interrogation of the resulting data for evaluation of potential lineage pathways requires a new quantitative framework complete with appropriate statistical tests. Here, we develop such a framework, illustrating its utility by analyzing data from barcoded multipotent cells of the blood system. This application demonstrates that the data require additional paths beyond those found in the classical model, which leads us to propose that hematopoietic differentiation follows a loss of potential mechanism and to suggest further experiments to test this deduction. Our quantitative framework can evaluate the compatibility of lineage trees with barcoded data from any proliferating and differentiating cell system.
12. Cellular senescence mediates fibrotic pulmonary disease (United States)
Schafer, Marissa J.; White, Thomas A.; Iijima, Koji; Haak, Andrew J.; Ligresti, Giovanni; Atkinson, Elizabeth J.; Oberg, Ann L.; Birch, Jodie; Salmonowicz, Hanna; Zhu, Yi; Mazula, Daniel L.; Brooks, Robert W.; Fuhrmann-Stroissnigg, Heike; Pirtskhalava, Tamar; Prakash, Y. S.; Tchkonia, Tamara; Robbins, Paul D.; Aubry, Marie Christine; Passos, João F.; Kirkland, James L.; Tschumperlin, Daniel J.; Kita, Hirohito; LeBrasseur, Nathan K.
Idiopathic pulmonary fibrosis (IPF) is a fatal disease characterized by interstitial remodelling, leading to compromised lung function. Cellular senescence markers are detectable within IPF lung tissue and senescent cell deletion rejuvenates pulmonary health in aged mice. Whether and how senescent cells regulate IPF or if their removal may be an efficacious intervention strategy is unknown. Here we demonstrate elevated abundance of senescence biomarkers in IPF lung, with p16 expression increasing with disease severity. We show that the secretome of senescent fibroblasts, which are selectively killed by a senolytic cocktail, dasatinib plus quercetin (DQ), is fibrogenic. Leveraging the bleomycin-injury IPF model, we demonstrate that early-intervention suicide-gene-mediated senescent cell ablation improves pulmonary function and physical health, although lung fibrosis is visibly unaltered. DQ treatment replicates benefits of transgenic clearance. Thus, our findings establish that fibrotic lung disease is mediated, in part, by senescent cells, which can be targeted to improve health and function. PMID:28230051
13. Cellular regulation of the dopamine transporter
DEFF Research Database (Denmark)
Eriksen, Jacob
-membrane spanning protein Tac, thereby creating an extracellular antibody epitope. Upon expression in HEK293 cells this TacDAT fusion protein displayed functional properties similar to the wild type transporter. In an ELISA based internalization assay, TacDAT intracellular accumulation was increased by inhibitors......The dopamine transporter (DAT) mediates reuptake of dopamine from the synaptic cleft and is a target for widely abused psychostimulants such as cocaine and amphetamine. Nonetheless, little is known about the cellular distribution and trafficking of natively expressed DAT. DAT and its trafficking...... to natively expressed transporter, DAT was visualized directly in cultured DA neurons using the fluorescent cocaine analog JHC 1-64. These data showed pronounced colocalization upon constitutive internalization with Lysotracker, a late endosomal/lysosomal marker; however only little cololization was observed...
14. A cellular automata model for ant trails
Indian Academy of Sciences (India)
Sibel Gokce; Ozhan Kayacan
In this study, the unidirectional ant traffic flow with U-turn in an ant trail was investigated using one-dimensional cellular automata model. It is known that ants communicate with each other by dropping a chemical, called pheromone, on the substrate. Apart from the studies in the literature, it was considered in the model that (i) ant colony consists of two kinds of ants, goodand poor-smelling ants, (ii) ants might make U-turn for some special reasons. For some values of densities of good- and poor-smelling ants, the flux and mean velocity of the colony were studied as a function of density and evaporation rate of pheromone.
15. Computing by Temporal Order: Asynchronous Cellular Automata
Directory of Open Access Journals (Sweden)
Michael Vielhaber
Full Text Available Our concern is the behaviour of the elementary cellular automata with state set 0,1 over the cell set Z/nZ (one-dimensional finite wrap-around case, under all possible update rules (asynchronicity. Over the torus Z/nZ (n<= 11,we will see that the ECA with Wolfram rule 57 maps any v in F_2^n to any w in F_2^n, varying the update rule. We furthermore show that all even (element of the alternating group bijective functions on the set F_2^n = 0,...,2^n-1, can be computed by ECA57, by iterating it a sufficient number of times with varying update rules, at least for n <= 10. We characterize the non-bijective functions computable by asynchronous rules.
16. Threshold effects and cellular recognition. Progress report
Energy Technology Data Exchange (ETDEWEB)
Rando, R R
In the first year we focused on developing the techniques required for the successful incorporation of synthetic glycolipids into cells. To these ends a new water-soluble spacer group (8-amino-3-6-dioxaoctanoic acid) was developed and incorporated into the cholesterol based synthetic glycolipids. These glycolipids could be incorporated into liposomes, rendering them susceptible to aggregation by the appropriate lectin. They also allowed us to define the minimal distance between the sugar moiety and membrane required for agglutination. Finally and most importantly, we were able to functionally incorporate these new glycolipids in cells and render them agglutinable with the appropriate lectins. Functional incorporation does not occur with glycolipids bearing hydropholic spacer groups. We are now in a position to begin using the new glycolipids to answer questions about the roles of cell surface sugars in cellular recognition, which is the subject of this renewal proposal.
17. Anisotropic selection in cellular genetic algorithms
CERN Document Server
Simoncini, David; Collard, Philippe; Clergue, Manuel
In this paper we introduce a new selection scheme in cellular genetic algorithms (cGAs). Anisotropic Selection (AS) promotes diversity and allows accurate control of the selective pressure. First we compare this new scheme with the classical rectangular grid shapes solution according to the selective pressure: we can obtain the same takeover time with the two techniques although the spreading of the best individual is different. We then give experimental results that show to what extent AS promotes the emergence of niches that support low coupling and high cohesion. Finally, using a cGA with anisotropic selection on a Quadratic Assignment Problem we show the existence of an anisotropic optimal value for which the best average performance is observed. Further work will focus on the selective pressure self-adjustment ability provided by this new selection scheme.
18. Cellular Automata Models for Diffusion of Innovations
CERN Document Server
Fuks, H; Fuks, Henryk; Boccara, Nino
We propose a probabilistic cellular automata model for the spread of innovations, rumors, news, etc. in a social system. The local rule used in the model is outertotalistic, and the range of interaction can vary. When the range R of the rule increases, the takeover time for innovation increases and converges toward its mean-field value, which is almost inversely proportional to R when R is large. Exact solutions for R=1 and $R=\\infty$ (mean-field) are presented, as well as simulation results for other values of R. The average local density is found to converge to a certain stationary value, which allows us to obtain a semi-phenomenological solution valid in the vicinity of the fixed point n=1 (for large t).
19. Cellular antioxidant activity of common vegetables. (United States)
Song, Wei; Derito, Christopher M; Liu, M Keshu; He, Xiangjiu; Dong, Mei; Liu, Rui Hai
The measurement of antioxidant activity using biologically relevant assays is important to screen fruits, vegetables, natural products, and dietary supplements for potential health benefits. The cellular antioxidant activity (CAA) assay quantifies antioxidant activity using a cell culture model and was developed to meet the need for a more biologically representative method than the popular chemistry antioxidant capacity measures. The objective of the study was to determine the CAA, total phenolic contents, and oxygen radical absorbance capacity (ORAC) values of 27 vegetables commonly consumed in the United States. Beets, broccoli, and red pepper had the highest CAA values, whereas cucumber had the lowest. CAA values were significantly correlated to total phenolic content. Potatoes were found to be the largest contributors of vegetable phenolics and CAA to the American diet. Increased fruit and vegetable consumption is an effective strategy to increase antioxidant intake and decrease oxidative stress and may lead to reduced risk of developing chronic diseases, such as cancer and cardiovascular disease.
20. Bioceramics for osteogenesis, molecular and cellular advances. (United States)
Demirkiran, Hande
The remarkable need for bone tissue replacement in clinical situations, its limited availability and some major drawbacks of autologous (from the patient) and allogeneic (from a donor) bone grafts are driving researchers to search for alternative approaches for bone repair. In order to develop an appropriate bone substitute, one should understand bone structure and properties and its growth, which will guide researchers to select the optimal conditions for tissue culture and implantation. It's well accepted that bioceramics are excellent candidates as bone replacement with osteogenesis, osteoinduction and osteoconduction capacity. Therefore, the molecular and cellular interactions that take place at the surface of bioceramics and their relevance in osteogenesis excites many researchers to delve deeper into this line of research.
1. Optimal temporal patterns for dynamical cellular signaling (United States)
Hasegawa, Yoshihiko
Cells use temporal dynamical patterns to transmit information via signaling pathways. As optimality with respect to the environment plays a fundamental role in biological systems, organisms have evolved optimal ways to transmit information. Here, we use optimal control theory to obtain the dynamical signal patterns for the optimal transmission of information, in terms of efficiency (low energy) and reliability (low uncertainty). Adopting an activation-deactivation decoding network, we reproduce several dynamical patterns found in actual signals, such as steep, gradual, and overshooting dynamics. Notably, when minimizing the energy of the input signal, the optimal signals exhibit overshooting, which is a biphasic pattern with transient and steady phases; this pattern is prevalent in actual dynamical patterns. We also identify conditions in which these three patterns (steep, gradual, and overshooting) confer advantages. Our study shows that cellular signal transduction is governed by the principle of minimizing free energy dissipation and uncertainty; these constraints serve as selective pressures when designing dynamical signaling patterns.
2. Commercialization of cellular immunotherapies for cancer. (United States)
Walker, Anthony; Johnson, Robert
Successful commercialization of a cell therapy requires more than proving safety and efficacy to the regulators. The inherent complexity of cellular products delivers particular manufacturing, logistical and reimbursement hurdles that threaten commercial viability for any therapy with a less than spectacular clinical profile that truly changes the standard of care. This is particularly acute for autologous cell therapies where patients receive bespoke treatments manufactured from a sample of their own cells and where economies of scale, which play an important role in containing the production costs for small molecule and antibody therapeutics, are highly limited. Nevertheless, the promise of 'game-changing' efficacy, as exemplified by very high levels of complete responses in refractory haematological malignancies, has attracted capital investments on a vast scale, and the attendant pace of technology development provides promising indicators for future clinical and commercial success.
3. Cellular nanotechnology: making biological interfaces smarter. (United States)
Mendes, Paula M
Recently, there has been an outburst of research on engineered cell-material interfaces driven by nanotechnology and its tools and techniques. This tutorial review begins by providing a brief introduction to nanostructured materials, followed by an overview of the wealth of nanoscale fabrication and analysis tools available for their development. This background serves as the basis for a discussion of early breakthroughs and recent key developments in the endeavour to develop nanostructured materials as smart interfaces for fundamental cellular studies, tissue engineering and regenerative medicine. The review covers three major aspects of nanostructured interfaces - nanotopographical control, dynamic behaviour and intracellular manipulation and sensing - where efforts are continuously being made to further understand cell function and provide new ways to control cell behaviour. A critical reflection of the current status and future challenges are discussed as a conclusion to the review.
4. Call Admission Control in Mobile Cellular Networks
CERN Document Server
Ghosh, Sanchita
5. Microfluidic electroporation for cellular analysis and delivery. (United States)
Geng, Tao; Lu, Chang
Electroporation is a simple yet powerful technique for breaching the cell membrane barrier. The applications of electroporation can be generally divided into two categories: the release of intracellular proteins, nucleic acids and other metabolites for analysis and the delivery of exogenous reagents such as genes, drugs and nanoparticles with therapeutic purposes or for cellular manipulation. In this review, we go over the basic physics associated with cell electroporation and highlight recent technological advances on microfluidic platforms for conducting electroporation. Within the context of its working mechanism, we summarize the accumulated knowledge on how the parameters of electroporation affect its performance for various tasks. We discuss various strategies and designs for conducting electroporation at the microscale and then focus on analysis of intracellular contents and delivery of exogenous agents as two major applications of the technique. Finally, an outlook for future applications of microfluidic electroporation in increasingly diverse utilities is presented.
6. Wireless traffic steering for green cellular networks
CERN Document Server
Zhang, Shan; Zhou, Sheng; Niu, Zhisheng; Shen, Xuemin (Sherman)
This book introduces wireless traffic steering as a paradigm to realize green communication in multi-tier heterogeneous cellular networks. By matching network resources and dynamic mobile traffic demand, traffic steering helps to reduce on-grid power consumption with on-demand services provided. This book reviews existing solutions from the perspectives of energy consumption reduction and renewable energy harvesting. Specifically, it explains how traffic steering can improve energy efficiency through intelligent traffic-resource matching. Several promising traffic steering approaches for dynamic network planning and renewable energy demand-supply balancing are discussed. This book presents an energy-aware traffic steering method for networks with energy harvesting, which optimizes the traffic allocated to each cell based on the renewable energy status. Renewable energy demand-supply balancing is a key factor in energy dynamics, aimed at enhancing renewable energy sustainability to reduce on-grid energy consum...
7. Cellular signaling by fibroblast growth factor receptors. (United States)
Eswarakumar, V P; Lax, I; Schlessinger, J
The 22 members of the fibroblast growth factor (FGF) family of growth factors mediate their cellular responses by binding to and activating the different isoforms encoded by the four receptor tyrosine kinases (RTKs) designated FGFR1, FGFR2, FGFR3 and FGFR4. Unlike other growth factors, FGFs act in concert with heparin or heparan sulfate proteoglycan (HSPG) to activate FGFRs and to induce the pleiotropic responses that lead to the variety of cellular responses induced by this large family of growth factors. A variety of human skeletal dysplasias have been linked to specific point mutations in FGFR1, FGFR2 and FGFR3 leading to severe impairment in cranial, digital and skeletal development. Gain of function mutations in FGFRs were also identified in a variety of human cancers such as myeloproliferative syndromes, lymphomas, prostate and breast cancers as well as other malignant diseases. The binding of FGF and HSPG to the extracellular ligand domain of FGFR induces receptor dimerization, activation and autophosphorylation of multiple tyrosine residues in the cytoplasmic domain of the receptor molecule. A variety of signaling proteins are phosphorylated in response to FGF stimulation including Shc, phospholipase-Cgamma, STAT1, Gab1 and FRS2alpha leading to stimulation of intracellular signaling pathways that control cell proliferation, cell differentiation, cell migration, cell survival and cell shape. The docking proteins FRS2alpha and FRS2beta are major mediators of the Ras/MAPK and PI-3 kinase/Akt signaling pathways as well as negative feedback mechanisms that fine-tune the signal that is initiated at the cell surface following FGFR stimulation.
8. Statistical physical models of cellular motility (United States)
Banigan, Edward J.
Cellular motility is required for a wide range of biological behaviors and functions, and the topic poses a number of interesting physical questions. In this work, we construct and analyze models of various aspects of cellular motility using tools and ideas from statistical physics. We begin with a Brownian dynamics model for actin-polymerization-driven motility, which is responsible for cell crawling and "rocketing" motility of pathogens. Within this model, we explore the robustness of self-diffusiophoresis, which is a general mechanism of motility. Using this mechanism, an object such as a cell catalyzes a reaction that generates a steady-state concentration gradient that propels the object in a particular direction. We then apply these ideas to a model for depolymerization-driven motility during bacterial chromosome segregation. We find that depolymerization and protein-protein binding interactions alone are sufficient to robustly pull a chromosome, even against large loads. Next, we investigate how forces and kinetics interact during eukaryotic mitosis with a many-microtubule model. Microtubules exert forces on chromosomes, but since individual microtubules grow and shrink in a force-dependent way, these forces lead to bistable collective microtubule dynamics, which provides a mechanism for chromosome oscillations and microtubule-based tension sensing. Finally, we explore kinematic aspects of cell motility in the context of the immune system. We develop quantitative methods for analyzing cell migration statistics collected during imaging experiments. We find that during chronic infection in the brain, T cells run and pause stochastically, following the statistics of a generalized Levy walk. These statistics may contribute to immune function by mimicking an evolutionarily conserved efficient search strategy. Additionally, we find that naive T cells migrating in lymph nodes also obey non-Gaussian statistics. Altogether, our work demonstrates how physical
9. Gene-expression signatures of Atlantic salmon's plastic life cycle (United States)
Aubin-Horth, N.; Letcher, B.H.; Hofmann, H.A.
How genomic expression differs as a function of life history variation is largely unknown. Atlantic salmon exhibits extreme alternative life histories. We defined the gene-expression signatures of wild-caught salmon at two different life stages by comparing the brain expression profiles of mature sneaker males and immature males, and early migrants and late migrants. In addition to life-stage-specific signatures, we discovered a surprisingly large gene set that was differentially regulated-at similar magnitudes, yet in opposite direction-in both life history transitions. We suggest that this co-variation is not a consequence of many independent cellular and molecular switches in the same direction but rather represents the molecular equivalent of a physiological shift orchestrated by one or very few master regulators. ?? 2009 Elsevier Inc. All rights reserved.
10. Hormesis, cellular stress response and vitagenes as critical determinants in aging and longevity. (United States)
Calabrese, Vittorio; Cornelius, Carolin; Cuzzocrea, Salvatore; Iavicoli, Ivo; Rizzarelli, Enrico; Calabrese, Edward J
Understanding mechanisms of aging and determinants of life span will help to reduce age-related morbidity and facilitate healthy aging. Average lifespan has increased over the last centuries, as a consequence of medical and environmental factors, but maximal life span remains unchanged. Extension of maximal life span is currently possible in animal models with measures such as genetic manipulations and caloric restriction (CR). CR appears to prolong life by reducing reactive oxygen species (ROS)-mediated oxidative damage. But ROS formation, which is positively implicated in cellular stress response mechanisms, is a highly regulated process controlled by a complex network of intracellular signaling pathways. By sensing the intracellular nutrient and energy status, the functional state of mitochondria, and the concentration of ROS produced in mitochondria, the longevity network regulates life span across species by co-ordinating information flow along its convergent, divergent and multiply branched signaling pathways, including vitagenes which are genes involved in preserving cellular homeostasis during stressful conditions. Vitagenes encode for heat shock proteins (Hsp) Hsp32, Hsp70, the thioredoxin and the sirtuin protein systems. Dietary antioxidants, such as carnosine, carnitines or polyphenols, have recently been demonstrated to be neuroprotective through the activation of hormetic pathways, including vitagenes. The hormetic dose-response, challenges long-standing beliefs about the nature of the dose-response in a lowdose zone, having the potential to affect significantly the design of pre-clinical studies and clinical trials as well as strategies for optimal patient dosing in the treatment of numerous diseases. Given the broad cytoprotective properties of the heat shock response there is now strong interest in discovering and developing pharmacological agents capable of inducing stress responses. In this review we discuss the most current and up to date
11. Aging of the inceptive cellular population: the relationship between stem cells and aging. (United States)
Symonds, Catherine E; Galderisi, Umberto; Giordano, Antonio
The average life expectancy worldwide has about doubled and the global population has increased six fold over the past century. With improving health care in the developed world there is a proportional augmentation in the treatment necessary for elderly patients occasioning the call for increased research in the area of aging and age-related diseases. The manifestation of this research has been focalized on the causative cellular processes and molecular mechanisms involved. Here we will discuss the efforts of this research in the area of stem cells, delving into the regulatory mechanisms and how their de-regulation could be attributed to aging and age-related diseases.
12. Cytosolic iron-sulfur cluster assembly (CIA) system: factors, mechanism, and relevance to cellular iron regulation. (United States)
Sharma, Anil K; Pallesen, Leif J; Spang, Robert J; Walden, William E
FeS cluster biogenesis is an essential process in virtually all forms of life. Complex protein machineries that are conserved from bacteria through higher eukaryotes facilitate assembly of the FeS cofactor in proteins. In the last several years, significant strides have been made in our understanding of FeS cluster assembly and the functional overlap of this process with cellular iron homeostasis. This minireview summarizes the present understanding of the cytosolic iron-sulfur cluster assembly (CIA) system in eukaryotes, with a focus on information gained from studies in budding yeast and mammalian systems.
13. Cellular resolution models for even skipped regulation in the entire Drosophila embryo
Ilsley, Garth R; Fisher, Jasmin; Apweiler, Rolf; Angela H. DePace; Nicholas M Luscombe
eLife digest The transcription of genes into messenger RNA (mRNA) molecules is one of the most important processes in biology, but our present understanding of this process is largely qualitative. Molecules such as transcription factors and regions of DNA other than the region that codes for the mRNA are known to interact with each other to influence the onset of transcription, and also the rate at which it occurs. However, given the cellular concentrations of transcription factors in a devel...
14. Controlling Cellular Endocytosis at the Nanoscale (United States)
Battaglia, Giuseppe
One of the most challenging aspects of drug delivery is the intra-cellular delivery of active agents. Several drugs and especially nucleic acids all need to be delivered within the cell interior to exert their therapeutic action. Small hydrophobic molecules can permeate cell membranes with relative ease, but hydrophilic molecules and especially large macromolecules such as proteins and nucleic acids require a vector to assist their transport across the cell membrane. This must be designed so as to ensure intracellular delivery without compromising cell viability. We have recently achieved this by using pH-sensitive poly(2-(methacryloyloxy)ethyl-phosphorylcholine)- co -poly(2-(diisopropylamino)ethyl methacrylate) (PMPC-PDPA) and poly(ethylene oxide)-co- poly(2-(diisopropylamino)ethyl methacrylate) (PEO-PDPA) diblock copolymers that self-assemble to form vesicles in aqueous solution. These vesicles combine a non-fouling PMPC or PEO block with a pH-sensitive PDPA block and have the ability to encapsulate both hydrophobic molecules within the vesicular membrane and hydrophilic molecules within their aqueous cores. The pH sensitive nature of the PDPA blocks make the diblock copolymers forming stable vesicles at physiological pH but that rapid dissociation of these vesicles occurs between pH 5 and pH 6 to form molecularly dissolved copolymer chains (unimers). We used these vesicles to encapsulate small and large macromolecules and these were successfully delivered intracellularly including nucleic acid, drugs, quantum dots, and antibodies. Dynamic light scattering, zeta potential measurements, and transmission electron microscopy were used to study and optimise the encapsulation processes. Confocal laser scanning microscopy, fluorescence flow cytometry and lysates analysis were used to quantify cellular uptake and to study the kinetics of this process in vitro and in vivo. We show the effective cytosolic delivery of nucleic acids, proteins, hydrophobic molecules
15. 1,4-Naphthoquinones: From Oxidative Damage to Cellular and Inter-Cellular Signaling
Directory of Open Access Journals (Sweden)
Lars-Oliver Klotz
Full Text Available Naphthoquinones may cause oxidative stress in exposed cells and, therefore, affect redox signaling. Here, contributions of redox cycling and alkylating properties of quinones (both natural and synthetic, such as plumbagin, juglone, lawsone, menadione, methoxy-naphthoquinones, and others to cellular and inter-cellular signaling processes are discussed: (i naphthoquinone-induced Nrf2-dependent modulation of gene expression and its potentially beneficial outcome; (ii the modulation of receptor tyrosine kinases, such as the epidermal growth factor receptor by naphthoquinones, resulting in altered gap junctional intercellular communication. Generation of reactive oxygen species and modulation of redox signaling are properties of naphthoquinones that render them interesting leads for the development of novel compounds of potential use in various therapeutic settings.
16. Cellular basis of gravity resistance in plants (United States)
Hoson, Takayuki; Matsumoto, Shouhei; Inui, Kenichi; Zhang, Yan; Soga, Kouichi; Wakabayashi, Kazuyuki; Hashimoto, Takashi
Mechanical resistance to the gravitational force is a principal gravity response in plants distinct from gravitropism. In the final step of gravity resistance, plants increase the rigidity of their cell walls via modifications to the cell wall metabolism and apoplastic environment. We studied cellular events that are related to the cell wall changes under hypergravity conditions produced by centrifugation. Hypergravity induced reorientation of cortical microtubules from transverse to longitudinal directions in epidermal cells of stem organs. In Arabidopsis tubulin mutants, the percentage of cells with longitudinal microtubules was high even at 1 g, and it was further increased by hypergravity. Hypocotyls of tubulin mutants also showed either left-handed or right-handed helical growth at 1 g, and the degree of twisting phenotype was intensified under hypergravity conditions. The left-handed helical growth mutants had right-handed microtubule arrays, whereas the right-handed mutant had left-handed arrays. There was a close correlation between the alignment angle of epidermal cell files and the alignment of cortical microtubules. Gadolinium ions suppressed both the twisting phenotype and reorientation of microtubules in tubulin mutants. These results support the hypothesis that cortical microtubules play an es-sential role in maintenance of normal growth phenotype against the gravitational force, and suggest that mechanoreceptors are involved in modifications to morphology and orientation of microtubule arrays by hypergravity. Actin microfilaments, in addition to microtubules, may be involved in gravity resistance. The nucleus of epidermal cells of azuki bean epicotyls, which is present almost in the center of the cell at 1 g, was displaced to the cell bottom by increasing the magnitude of gravity. Cytochalasin D stimulated the sedimentation by hypergravity of the nu-cleus, suggesting that the positioning of the nucleus is regulated by actin microfilaments, which is
17. Cellular immune responses towards regulatory cells. (United States)
Larsen, Stine Kiær
This thesis describes the results from two published papers identifying spontaneous cellular immune responses against the transcription factors Foxp3 and Foxo3. The tumor microenvironment is infiltrated by cells that hinder effective tumor immunity from developing. Two of these cell types, which have been linked to a bad prognosis for patients, are regulatory T cells (Treg) and tolerogenic dendritic cells (DC). Tregs inhibit effector T cells from attacking the tumor through various mechanisms, including secreted factors and cell-to-cell contact. Tregs express the transcription factor Foxp3, which is necessary for their development and suppressive activities. Tolerogenic DCs participate in creating an environment in the tumor where effector T cells become tolerant towards the tumor instead of attacking it. The transcription factor Foxo3 was recently described to be highly expressed by tolerogenic DCs and to programme their tolerogenic influence. This thesis describes for the first time the existence of spontaneous cellular immune responses against peptides derived from Foxp3 and Foxo3. We have detected the presence of cytotoxic T cells that recognise these peptides in an HLA-A2 restricted manner in cancer patients and for Foxp3 in healthy donors as well. In addition, we have demonstrated that the Foxp3- and Foxo3-specific CTLs recognize Foxp3- and Foxo3-expressing cancer cell lines and importantly, suppressive immune cells, namely Tregs and in vitro generated DCs. Cancer immunotherapy is recently emerging as an important treatment modality improving the survival of selected patients. The current progress is largely owing to targeting of the immune suppressive milieu that is dominating the tumor microenvironment. This is being done through immune checkpoint blockade with CTLA-4 and PD-1/PD-L1 antibodies and through lymphodepleting conditioning of patients and ex vivo activation of TILs in adoptive cell transfer. Several strategies are being explored for depletion of
18. Cellular membrane collapse by atmospheric-pressure plasma jet (United States)
Kim, Kangil; Jun Ahn, Hak; Lee, Jae-Hyeok; Kim, Jae-Ho; Sik Yang, Sang; Lee, Jong-Soo
Cellular membrane dysfunction caused by air plasma in cancer cells has been studied to exploit atmospheric-pressure plasma jets for cancer therapy. Here, we report that plasma jet treatment of cervical cancer HeLa cells increased electrical conductivity across the cellular lipid membrane and caused simultaneous lipid oxidation and cellular membrane collapse. We made this finding by employing a self-manufactured microelectrode chip. Furthermore, increased roughness of the cellular lipid membrane and sequential collapse of the membrane were observed by atomic force microscopy following plasma jet treatment. These results suggest that the cellular membrane catastrophe occurs via coincident altered electrical conductivity, lipid oxidation, and membrane roughening caused by an atmospheric-pressure plasma jet, possibly resulting in cellular vulnerability to reactive species generated from the plasma as well as cytotoxicity to cancer cells.
19. Cellular membrane collapse by atmospheric-pressure plasma jet
Energy Technology Data Exchange (ETDEWEB)
Kim, Kangil; Sik Yang, Sang, E-mail:, E-mail: [Department of Electrical and Computer Engineering, Ajou University, Suwon 443-749 (Korea, Republic of); Jun Ahn, Hak; Lee, Jong-Soo, E-mail:, E-mail: [Department of Biological Sciences, Ajou University, Suwon 443-749 (Korea, Republic of); Lee, Jae-Hyeok; Kim, Jae-Ho [Department of Molecular Science and Technology, Ajou University, Suwon 443-749 (Korea, Republic of)
20. Power Control in Multi-Layer Cellular Networks
CERN Document Server
Davaslioglu, Kemal
We investigate the possible performance gains of power control in multi-layer cellular systems where microcells and picocells are distributed within macrocells. Although multilayers in cellular networks help increase system capacity and coverage, and can reduce total energy consumption; they cause interference, reducing the performance of the network. Therefore, downlink transmit power levels of multi-layer hierarchical cellular networks need to be controlled in order to fully exploit their benefits. In this work, we present an analytical derivation to determine optimum power levels for two-layer cellular networks and generalize our solution to multi-layer cellular networks. We also simulate our results in a typical multi-layer network setup and observe significant power savings compared to single-layer cellular networks.
1. Green Cellular Networks: A Survey, Some Research Issues and Challenges
CERN Document Server
Hasan, Ziaul; Bhargava, Vijay K
Energy efficiency in cellular networks is a growing concern for cellular operators to not only maintain profitability, but also to reduce the overall environment effects. This emerging trend of achieving energy efficiency in cellular networks is motivating the standardization authorities and network operators to continuously explore future technologies in order to bring improvements in the entire network infrastructure. In this article, we present a brief survey of methods to improve the power efficiency of cellular networks, explore some research issues and challenges and suggest some techniques to enable an energy efficient or "green" cellular network. Since base stations consume a maximum portion of the total energy used in a cellular system, we will first provide a comprehensive survey on techniques to obtain energy savings in base stations. Next, we discuss how heterogenous network deployment based on micro, pico and femto-cells can be used to achieve this goal. Since cognitive radio and cooperative rela...
2. Numerical investigation on evolution of cylindrical cellular detonation
Institute of Scientific and Technical Information of China (English)
WANG Chun; JIANG Zong-lin; HU Zong-min; HAN Gui-lai
Cylindrical cellular detonation is numerically investigated by solving twodimensional reactive Euler equations with a finite volume method on a two-dimensional self-adaptive unstructured mesh.The one-step reversible chemical reaction model is applied to simplify the control parameters of chemical reaction.Numerical results demonstrate the evolution of cellular cell splitting of cylindrical cellular detonation explored in experimentas.Split of cellular structures shows different features in the near-field and far-field from the initiation zone.Variation of the local curvature is a key factor in the behavior of cell split of cylindrical cellular detonation in propagation.Numerical results show that split of cellular structures comes from the self-organization of transverse waves corresponding to the development of small disturbances along the detonation front related to detonation instability.
3. Vasculogenesis and Its Cellular Therapeutic Applications. (United States)
Ratajska, Anna; Jankowska-Steifer, Ewa; Czarnowska, Elżbieta; Olkowski, Radosław; Gula, Grzegorz; Niderla-Bielińska, Justyna; Flaht-Zabost, Aleksandra; Jasińska, Agnieszka
Vasculogenesis was originally defined by Risau in 1997 [Nature 386: 671-674] as the de novo formation of vessels from endothelial progenitor cells (EPCs), so-called angioblasts. Initially, this process was believed to be related only to embryonic life; however, further studies reported vasculogenesis to occur also in adult tissues. This overview presents the current knowledge about the origin, differentiation and significance of EPCs that have been observed in various diseases, tumors, and reparative processes. We also summarize the knowledge of how to activate these cells for therapeutic purposes and the outcomes of the therapies.
4. Iron Oxide Nanoparticles Stimulates Extra-Cellular Matrix Production in Cellular Spheroids
Directory of Open Access Journals (Sweden)
Megan Casco
Full Text Available Nanotechnologies have been integrated into drug delivery, and non-invasive imaging applications, into nanostructured scaffolds for the manipulation of cells. The objective of this work was to determine how the physico-chemical properties of magnetic nanoparticles (MNPs and their spatial distribution into cellular spheroids stimulated cells to produce an extracellular matrix (ECM. The MNP concentration (0.03 mg/mL, 0.1 mg/mL and 0.3 mg/mL, type (magnetoferritin, shape (nanorod—85 nm × 425 nm and incorporation method were studied to determine each of their effects on the specific stimulation of four ECM proteins (collagen I, collagen IV, elastin and fibronectin in primary rat aortic smooth muscle cell. Results demonstrated that as MNP concentration increased there was up to a 6.32-fold increase in collagen production over no MNP samples. Semi-quantitative Immunohistochemistry (IHC results demonstrated that MNP type had the greatest influence on elastin production with a 56.28% positive area stain compared to controls and MNP shape favored elastin stimulation with a 50.19% positive area stain. Finally, there are no adverse effects of MNPs on cellular contractile ability. This study provides insight on the stimulation of ECM production in cells and tissues, which is important because it plays a critical role in regulating cellular functions.
5. Life Before Earth
CERN Document Server
Sharov, Alexei A
6. Life is determined by its environment (United States)
Torday, John S.; Miller, William B.
A well-developed theory of evolutionary biology requires understanding of the origins of life on Earth. However, the initial conditions (ontology) and causal (epistemology) bases on which physiology proceeded have more recently been called into question, given the teleologic nature of Darwinian evolutionary thinking. When evolutionary development is focused on cellular communication, a distinctly different perspective unfolds. The cellular communicative-molecular approach affords a logical progression for the evolutionary narrative based on the basic physiologic properties of the cell. Critical to this appraisal is recognition of the cell as a fundamental reiterative unit of reciprocating communication that receives information from and reacts to epiphenomena to solve problems. Following the course of vertebrate physiology from its unicellular origins instead of its overt phenotypic appearances and functional associations provides a robust, predictive picture for the means by which complex physiology evolved from unicellular organisms. With this foreknowledge of physiologic principles, we can determine the fundamentals of Physiology based on cellular first principles using a logical, predictable method. Thus, evolutionary creativity on our planet can be viewed as a paradoxical product of boundary conditions that permit homeostatic moments of varying length and amplitude that can productively absorb a variety of epigenetic impacts to meet environmental challenges.
7. Early Life Exposures and Cancer (United States)
Early-life events and exposures have important consequences for cancer development later in life, however, epidemiological studies of early-life factors and cancer development later in life have had significant methodological challenges.
8. Some Properties of Fractals Generated by Linear Cellular Automata
Institute of Scientific and Technical Information of China (English)
Fractals and cellular automata are both significant areas of research in nonlinear analysis. This paper studies a class of fractals generated by cellular automata. The patterns produced by cellular automata give a special sequence of sets in Euclidean space. The corresponding limit set is shown to be a fractal and the dimension is independent of the choice of the finite initial seed. As opposed to previous works, the fractals here do not depend on the time parameter.
9. Real-Time Bioluminescent Tracking of Cellular Population Dynamics
Close, Dan; Xu, Tingling; Ripp, Steven; Sayler, Gary
Cellular population dynamics are routinely monitored across many diverse fields for a variety of purposes. In general, these dynamics are assayed either through the direct counting of cellular aliquots followed by extrapolation to the total population size, or through the monitoring of signal intensity from any number of externally stimulated reporter proteins. While both viable methods, here we describe a novel technique that allows for the automated, non-destructive tracking of cellular pop...
10. Cell biology of the future: Nanometer-scale cellular cartography. (United States)
Taraska, Justin W
Understanding cellular structure is key to understanding cellular regulation. New developments in super-resolution fluorescence imaging, electron microscopy, and quantitative image analysis methods are now providing some of the first three-dimensional dynamic maps of biomolecules at the nanometer scale. These new maps--comprehensive nanometer-scale cellular cartographies--will reveal how the molecular organization of cells influences their diverse and changeable activities.
11. Life insurance mathematics
CERN Document Server
Gerber, Hans U
This concise introduction to life contingencies, the theory behind the actuarial work around life insurance and pension funds, will appeal to the reader who likes applied mathematics. In addition to model of life contingencies, the theory of compound interest is explained and it is shown how mortality and other rates can be estimated from observations. The probabilistic model is used consistently throughout the book. Numerous exercises (with answers and solutions) have been added, and for this third edition several misprints have been corrected.
12. Artificial life and Piaget. (United States)
Mueller, Ulrich; Grobman, K H.
Artificial life provides important theoretical and methodological tools for the investigation of Piaget's developmental theory. This new method uses artificial neural networks to simulate living phenomena in a computer. A recent study by Parisi and Schlesinger suggests that artificial life might reinvigorate the Piagetian framework. We contrast artificial life with traditional cognitivist approaches, discuss the role of innateness in development, and examine the relation between physiological and psychological explanations of intelligent behaviour.
13. Cellular recurrent deep network for image registration (United States)
Alam, M.; Vidyaratne, L.; Iftekharuddin, Khan M.
Image registration using Artificial Neural Network (ANN) remains a challenging learning task. Registration can be posed as a two-step problem: parameter estimation and actual alignment/transformation using the estimated parameters. To date ANN based image registration techniques only perform the parameter estimation, while affine equations are used to perform the actual transformation. In this paper, we propose a novel deep ANN based image rigid registration that combines parameter estimation and transformation as a simultaneous learning task. Our previous work shows that a complex universal approximator known as Cellular Simultaneous Recurrent Network (CSRN) can successfully approximate affine transformations with known transformation parameters. This study introduces a deep ANN that combines a feed forward network with a CSRN to perform full rigid registration. Layer wise training is used to pre-train feed forward network for parameter estimation and followed by a CSRN for image transformation respectively. The deep network is then fine-tuned to perform the final registration task. Our result shows that the proposed deep ANN architecture achieves comparable registration accuracy to that of image affine transformation using CSRN with known parameters. We also demonstrate the efficacy of our novel deep architecture by a performance comparison with a deep clustered MLP.
14. Cellular-level surgery using nano robots. (United States)
Song, Bo; Yang, Ruiguo; Xi, Ning; Patterson, Kevin Charles; Qu, Chengeng; Lai, King Wai Chiu
The atomic force microscope (AFM) is a popular instrument for studying the nano world. AFM is naturally suitable for imaging living samples and measuring mechanical properties. In this article, we propose a new concept of an AFM-based nano robot that can be applied for cellular-level surgery on living samples. The nano robot has multiple functions of imaging, manipulation, characterizing mechanical properties, and tracking. In addition, the technique of tip functionalization allows the nano robot the ability for precisely delivering a drug locally. Therefore, the nano robot can be used for conducting complicated nano surgery on living samples, such as cells and bacteria. Moreover, to provide a user-friendly interface, the software in this nano robot provides a "videolized" visual feedback for monitoring the dynamic changes on the sample surface. Both the operation of nano surgery and observation of the surgery results can be simultaneously achieved. This nano robot can be easily integrated with extra modules that have the potential applications of characterizing other properties of samples such as local conductance and capacitance.
15. Extra cellular matrix features in human meninges. (United States)
Montagnani, S; Castaldo, C; Di Meglio, F; Sciorio, S; Giordano-Lanza, G
We collected human fetal and adult normal meninges to relate the age of the tissue with the presence of collagenous and non-collagenous components of Extra Cellular Matrix (ECM). Immunohistochemistry led us to observe some differences in the amount and in the distribution of these proteins between the two sets of specimens. In particular, laminin and tenascin seem to be expressed more intensely in fetal meninges when compared to adult ones. In order to investigate whether the morphofunctional characteristics of fetal meninges may be represented in pathological conditions we also studied meningeal specimens from human meningiomas. Our attention was particularly focused on the expression of those non-collagenous proteins involved in nervous cell migration and neuronal morphogenesis as laminin and tenascin, which were present in lesser amount in normal adult specimens. Microscopical evidences led us to hipothesize that these proteins which are synthesized in a good amount during the fetal development of meninges can be newly produced in tumors. On the contrary, the role of tenascin and laminin in adult meninges is probably only interesting for their biophysical characteristics.
16. Endothelial Cellular Responses to Biodegradable Metal Zinc. (United States)
Ma, Jun; Zhao, Nan; Zhu, Donghui
Biodegradable zinc (Zn) metals, a new generation of biomaterials, have attracted much attention due to their excellent biodegradability, bioabsorbability, and adaptability to tissue regeneration. Compared with magnesium (Mg) and iron (Fe), Zn exhibits better corrosion and mechanical behaviors in orthopedic and stent applications. After implantation, Zn containing material will slowly degrade, and Zn ions (Zn(2+)) will be released to the surrounding tissue. For stent applications, the local Zn(2+)concentration near endothelial tissue/cells could be high. However, it is unclear how endothelia will respond to such high concentrations of Zn(2+), which is pivotal to vascular remodeling and regeneration. Here, we evaluated the short-term cellular behaviors of primary human coronary artery endothelial cells (HCECs) exposed to a concentration gradient (0-140 μM) of extracellular Zn(2+). Zn(2+) had an interesting biphasic effect on cell viability, proliferation, spreading, and migration. Generally, low concentrations of Zn(2+) promoted viability, proliferation, adhesion, and migration, while high concentrations of Zn(2+) had opposite effects. For gene expression profiles, the most affected functional genes were related to cell adhesion, cell injury, cell growth, angiogenesis, inflammation, vessel tone, and coagulation. These results provide helpful information and guidance for Zn-based alloy design as well as the controlled release of Zn(2+)in stent and other related medical applications.
17. Cellular Automata Model for Elastic Solid Material
Institute of Scientific and Technical Information of China (English)
DONG Yin-Feng; ZHANG Guang-Cai; XU Ai-Guo; GAN Yan-Biao
The Cellular Automaton (CA) modeling and simulation of solid dynamics is a long-standing difficult problem.In this paper we present a new two-dimensional CA model for solid dynamics.In this model the solid body is represented by a set of white and black particles alternatively positioned in the x-and y-directions.The force acting on each particle is represented by the linear summation of relative displacements of the nearest-neighboring particles.The key technique in this new model is the construction of eight coefficient matrices.Theoretical and numerical analyses show that the present model can be mathematically described by a conservative system.So,it works for elastic material.In the continuum limit the CA model recovers the well-known Navier equation.The coefficient matrices are related to the shear module and Poisson ratio of the material body.Compared with previous CA model for solid body,this model realizes the natural coupling of deformations in the x-and y-directions.Consequently,the wave phenomena related to the Poisson ratio effects are successfully recovered.This work advances significantly the CA modeling and simulation in the field of computational solid dynamics.
18. Analytical Modeling of Uplink Cellular Networks
CERN Document Server
Novlan, Thomas D; Andrews, Jeffrey G
Cellular uplink analysis has typically been undertaken by either a simple approach that lumps all interference into a single deterministic or random parameter in a Wyner-type model, or via complex system level simulations that often do not provide insight into why various trends are observed. This paper proposes a novel middle way that is both accurate and also results in easy-to-evaluate integral expressions based on the Laplace transform of the interference. We assume mobiles and base stations are randomly placed in the network with each mobile pairing up to its closest base station. The model requires two important changes compared to related recent work on the downlink. First, dependence is introduced between the user and base station point processes to make sure each base station serves a single mobile in the given resource block. Second, per-mobile power control is included, which further couples the locations of the mobiles and their receiving base stations. Nevertheless, we succeed in deriving the cov...
19. Biophysical Tools to Study Cellular Mechanotransduction
Directory of Open Access Journals (Sweden)
Ismaeel Muhamed
Full Text Available The cell membrane is the interface that volumetrically isolates cellular components from the cell’s environment. Proteins embedded within and on the membrane have varied biological functions: reception of external biochemical signals, as membrane channels, amplification and regulation of chemical signals through secondary messenger molecules, controlled exocytosis, endocytosis, phagocytosis, organized recruitment and sequestration of cytosolic complex proteins, cell division processes, organization of the cytoskeleton and more. The membrane’s bioelectrical role is enabled by the physiologically controlled release and accumulation of electrochemical potential modulating molecules across the membrane through specialized ion channels (e.g., Na+, Ca2+, K+ channels. The membrane’s biomechanical functions include sensing external forces and/or the rigidity of the external environment through force transmission, specific conformational changes and/or signaling through mechanoreceptors (e.g., platelet endothelial cell adhesion molecule (PECAM, vascular endothelial (VE-cadherin, epithelial (E-cadherin, integrin embedded in the membrane. Certain mechanical stimulations through specific receptor complexes induce electrical and/or chemical impulses in cells and propagate across cells and tissues. These biomechanical sensory and biochemical responses have profound implications in normal physiology and disease. Here, we discuss the tools that facilitate the understanding of mechanosensitive adhesion receptors. This article is structured to provide a broad biochemical and mechanobiology background to introduce a freshman mechano-biologist to the field of mechanotransduction, with deeper study enabled by many of the references cited herein.
20. On the topological sensitivity of cellular automata (United States)
Baetens, Jan M.; De Baets, Bernard
Ever since the conceptualization of cellular automata (CA), much attention has been paid to the dynamical properties of these discrete dynamical systems, and, more in particular, to their sensitivity to the initial condition from which they are evolved. Yet, the sensitivity of CA to the topology upon which they are based has received only minor attention, such that a clear insight in this dependence is still lacking and, furthermore, a quantification of this so-called topological sensitivity has not yet been proposed. The lack of attention for this issue is rather surprising since CA are spatially explicit, which means that their dynamics is directly affected by their topology. To overcome these shortcomings, we propose topological Lyapunov exponents that measure the divergence of two close trajectories in phase space originating from a topological perturbation, and we relate them to a measure grasping the sensitivity of CA to their topology that relies on the concept of topological derivatives, which is introduced in this paper. The validity of the proposed methodology is illustrated for the 256 elementary CA and for a family of two-state irregular totalistic CA.
1. Biological (molecular and cellular) markers of toxicity
Energy Technology Data Exchange (ETDEWEB)
Shugart, L.R.
The overall objective of this study is to evaluate the use of the small aquarium fish, Japanese Medaka (Oryzias latipes), as a predictor of potential genotoxicity following exposure to carcinogens. This will be accomplished by quantitatively investigating the early molecular events associated with genotoxicity of various tissues of Medaka subsequent to exposure of the organism to several known carcinogens, such as diethylnitrosamine (DEN) and benzo(a)pyrene (BaP). Because of the often long latent period between initial contact with certain chemical and physical agents in our environment and subsequent expression of deleterious health or ecological impact, the development of sensitive methods for detecting and estimating early exposure is needed so that necessary interventions can ensue. A promising biological endpoint for detecting early exposure to damaging chemicals is the interaction of these compounds with cellular macromolecules such as Deoxyribonucleic acids (DNA). This biological endpoint assumes significance because it can be one of the critical early events leading eventually to adverse effects (neoplasia) in the exposed organism.
2. Optimal flux patterns in cellular metabolic networks
Energy Technology Data Exchange (ETDEWEB)
Almaas, E
The availability of whole-cell level metabolic networks of high quality has made it possible to develop a predictive understanding of bacterial metabolism. Using the optimization framework of flux balance analysis, I investigate metabolic response and activity patterns to variations in the availability of nutrient and chemical factors such as oxygen and ammonia by simulating 30,000 random cellular environments. The distribution of reaction fluxes is heavy-tailed for the bacteria H. pylori and E. coli, and the eukaryote S. cerevisiae. While the majority of flux balance investigations have relied on implementations of the simplex method, it is necessary to use interior-point optimization algorithms to adequately characterize the full range of activity patterns on metabolic networks. The interior-point activity pattern is bimodal for E. coli and S. cerevisiae, suggesting that most metabolic reaction are either in frequent use or are rarely active. The trimodal activity pattern of H. pylori indicates that a group of its metabolic reactions (20%) are active in approximately half of the simulated environments. Constructing the high-flux backbone of the network for every environment, there is a clear trend that the more frequently a reaction is active, the more likely it is a part of the backbone. Finally, I briefly discuss the predicted activity patterns of the central-carbon metabolic pathways for the sample of random environments.
3. Cellular effector mechanisms against Plasmodium liver stages. (United States)
Frevert, Ute; Nardin, Elizabeth
Advances in our understanding of the molecular and cell biology of the malaria parasite have led to new vaccine development efforts resulting in a pipeline of over 40 candidates undergoing clinical phase I-III trials. Vaccine-induced CD4+ and CD8+ T cells specific for pre-erythrocytic stage antigens have been found to express cytolytic and multi-cytokine effector functions that support a key role for these T cells within the hepatic environment. However, little is known of the cellular interactions that occur during the effector phase in which the intracellular hepatic stage of the parasite is targeted and destroyed. This review focuses on cell biological aspects of the interaction between malaria-specific effector cells and the various antigen-presenting cells that are known to exist within the liver, including hepatocytes, dendritic cells, Kupffer cells, stellate cells and sinusoidal endothelia. Considering the unique immune properties of the liver, it is conceivable that these different hepatic antigen-presenting cells fulfil distinct but complementary roles during the effector phase against Plasmodium liver stages.
4. [Cellular structure of propionibacteria during their multiplication]. (United States)
Sobczak, E; Kocoń, J
The aim of the present study was to determine the structure of bacterial cells from Propionibacterium genus as well as their structure during the cellular division. On the basis of the observations made in the electron transmission microscope, in uranyl-acetates-tained preparations of ultra-thin specimens of bacteria, it was stated that propionic bacteria appeared in a shape of short rods, possessing regular profiles of cell walls as opposed to Gram-negative bacteria with a very creased edge line. Besides, it was observed that division of cells had place by formation of septum, most probably preceded by the division of mezosome, which is a signal for creating the divisional wall. In the conducted studies, the following phenomena were started: presence of membraneous structure of mezosomes, which is linked with the chain of circular DNA in bacterial cell, appearance of numerous ribosomes in the regions of tangled threads of nucleic acids, and existence of other undefinite elements. Mezosome present in the cell of propionic bacteria is probably linked with the cell wall at least in two places and on the surface of external cell wall at the site of its linking; it causes the change in electronic density, demonstrated by the undefined holes or scars in cell wall. This finding gives the possibility of distinguishing this genus of Propionibacterium, in the respect of morphology, from other bacteria what, in the opinion of the authors, is a new achievement in the studies on the structure of propionic bacteria.
5. Thermal effects of radiation from cellular telephones (United States)
Wainwright, Peter
A finite element thermal model of the head has been developed to calculate temperature rises generated in the brain by radiation from cellular telephones and similar electromagnetic devices. A 1 mm resolution MRI dataset was segmented semiautomatically, assigning each volume element to one of ten tissue types. A finite element mesh was then generated using a fully automatic tetrahedral mesh generator developed at NRPB. There are two sources of heat in the model: firstly the natural metabolic heat production; and secondly the power absorbed from the electromagnetic field. The SAR was derived from a finite difference time domain model of the head, coupled to a model `mobile phone', namely a quarter-wavelength antenna mounted on a metal box. The steady-state temperature distribution was calculated using the standard Pennes `bioheat equation'. In the normal cerebral cortex the high blood perfusion rate serves to provide an efficient cooling mechanism. In the case of equipment generally available to the public, the maximum temperature rise found in the brain was about 0.1 °C. These results will help in the further development of criteria for exposure guidelines, and the technique developed may be used to assess temperature rises associated with SARs for different types of RF exposure.
6. Cellular and molecular aspects of gastric cancer
Institute of Scientific and Technical Information of China (English)
Malcolm G Smith; Georgina L Hold; Eiichi Tahara; Emad M El-Omar
Gastric cancer remains a global killer with a shifting burden from the developed to the developing world.The cancer develops along a multistage process that is defined by distinct histological and pathophysiological phases. Several genetic and epigenetic alterations mediate the transition from one stage to another and these include mutations in oncogenes, tumour suppressor genes and cell cycle and mismatch repair genes. The most significant advance in the fight against gastric caner came with the recognition of the role of Helicobacter pylori (H pylori) as the most important acquired aetiological agent for this cancer. Recent work has focussed on elucidating the complex host/microbial interactions that underlie the neoplastic process. There is now considerable insight into the pathogenesis of this cancer and the prospect of preventing and eradicating the disease has become a reality. Perhaps more importantly, the study of H pylori-induced gastric carcinogenesis offers a paradigm for understanding more complex human cancers. In this review, we examine the molecular and cellular events that underlie H pyloriinduced gastric cancer.
7. Noise Reduction Potential of Cellular Metals
Directory of Open Access Journals (Sweden)
Björn Hinze
Full Text Available Rising numbers of flights and aircrafts cause increasing aircraft noise, resulting in the development of various approaches to change this trend. One approach is the application of metallic liners in the hot gas path of aero-engines. At temperatures of up to 600 °C only metallic or ceramic structures can be used. Due to fatigue loading and the notch effect of the pores, mechanical properties of porous metals are superior to the ones of ceramic structures. Consequently, cellular metals like metallic foams, sintered metals, or sintered metal felts are most promising materials. However, acoustic absorption depends highly on pore morphology and porosity. Therefore, both parameters must be characterized precisely to analyze the correlation between morphology and noise reduction performance. The objective of this study is to analyze the relationship between pore morphology and acoustic absorption performance. The absorber materials are characterized using image processing based on two dimensional microscopy images. The sound absorption properties are measured using an impedance tube. Finally, the correlation of acoustic behavior, pore morphology, and porosity is outlined.
8. Bacterial Cellular Materials as Precursors of Chloroform (United States)
Wang, J.; Ng, T.; Zhang, Q.; Chow, A. T.; Wong, P.
The environmental sources of chloroform and other halocarbons have been intensively investigated because their effects of stratospheric ozone destruction and environmental toxicity. It has been demonstrated that microorganisms could facilitate the biotic generation of chloroform from natural organic matters in soil, but whether the cellular materials itself also serves as an important precursor due to photo-disinfection is poorly known. Herein, seven common pure bacterial cultures (Acinetobacter junii, Aeromonas hydrophila, Bacillus cereus, Bacillus substilis, Escherichia coli, Shigella sonnei, Staphylococcus sciuri) were chlorinated to evaluate the yields of chloroform, dibromochloromethane, dichlorobromomethane, and bromoform. The effects of bromide on these chemical productions and speciations were also investigated. Results showed that, on average, 5.64-36.42 μg-chloroform /mg-C were generated during the bacterial chlorination, in similar order of magnitude to that generated by humic acid (previously reported as 78 μg-chloroform/mg-C). However, unlike humic acid in water chlorination, chloroform concentration did not simply increase with the total organic carbon in water mixture. In the presence of bromide, the yield of brominated species responded linearly to the bromide concentration. This study provides useful information to understand the contributions of chloroform from photodisinfection processes in coastal environments.
9. Myoblast fusion: Experimental systems and cellular mechanisms. (United States)
Schejter, Eyal D
Fusion of myoblasts gives rise to the large, multi-nucleated muscle fibers that power and support organism motion and form. The mechanisms underlying this prominent form of cell-cell fusion have been investigated by a variety of experimental approaches, in several model systems. The purpose of this review is to describe and discuss recent progress in the field, as well as point out issues currently unresolved and worthy of further investigation. Following a description of several new experimental settings employed in the study of myoblast fusion, a series of topics relevant to the current understanding of the process are presented. These pertain to elements of three major cellular machineries- cell-adhesion, the actin-based cytoskeleton and membrane-associated elements- all of which play key roles in mediating myoblast fusion. Among the issues raised are the diversity of functions ascribed to different adhesion proteins (e.g. external cell apposition and internal recruitment of cytoskeleton regulators); functional significance of fusion-associated actin structures; and discussion of alternative mechanisms employing single or multiple fusion pore formation as the basis for muscle cell fusion.
10. Cooperative Handover Management in Dense Cellular Networks
KAUST Repository
Arshad, Rabe
Network densification has always been an important factor to cope with the ever increasing capacity demand. Deploying more base stations (BSs) improves the spatial frequency utilization, which increases the network capacity. However, such improvement comes at the expense of shrinking the BSs\\' footprints, which increases the handover (HO) rate and may diminish the foreseen capacity gains. In this paper, we propose a cooperative HO management scheme to mitigate the HO effect on throughput gains achieved via cellular network densification. The proposed HO scheme relies on skipping HO to the nearest BS at some instances along the user\\'s trajectory while enabling cooperative BS service during HO execution at other instances. To this end, we develop a mathematical model, via stochastic geometry, to quantify the performance of the proposed HO scheme in terms of coverage probability and user throughput. The results show that the proposed cooperative HO scheme outperforms the always best connected based association at high mobility. Also, the value of BS cooperation along with handover skipping is quantified with respect to the HO skipping only that has recently appeared in the literature. Particularly, the proposed cooperative HO scheme shows throughput gains of 12% to 27% and 17% on average, when compared to the always best connected and HO skipping only schemes at user velocity ranging from 80 km/h to 160 Km/h, respectively.
11. Some Properties of topological pressure on cellular automata
Directory of Open Access Journals (Sweden)
Chih-Hung Chang
Full Text Available This paper investigates the ergodicity and the power rule of the topological pressure of a cellular automaton. If a cellular automaton is either leftmost or rightmost premutive (due to the terminology given by Hedlund [Math.~Syst.~Theor.~3, 320-375, 1969], then it is ergodic with respect to the uniform Bernoulli measure. More than that, the relation of topological pressure between the original cellular automaton and its power rule is expressed in a closed form. As an application, the topological pressure of a linear cellular automaton can be computed explicitly.
12. Validation of self-reported cellular phone use
DEFF Research Database (Denmark)
Samkange-Zeeb, Florence; Berg, Gabriele; Blettner, Maria
BACKGROUND: In recent years, concern has been raised over possible adverse health effects of cellular telephone use. In epidemiological studies of cancer risk associated with the use of cellular telephones, the validity of self-reported cellular phone use has been problematic. Up to now there is ......BACKGROUND: In recent years, concern has been raised over possible adverse health effects of cellular telephone use. In epidemiological studies of cancer risk associated with the use of cellular telephones, the validity of self-reported cellular phone use has been problematic. Up to now...... there is very little information published on this subject. METHODS: We conducted a study to validate the questionnaire used in an ongoing international case-control study on cellular phone use, the "Interphone study". Self-reported cellular phone use from 68 of 104 participants who took part in our study...... was compared with information derived from the network providers over a period of 3 months (taken as the gold standard). RESULTS: Using Spearman's rank correlation, the correlation between self-reported phone use and information from the network providers for cellular phone use in terms of the number of calls...
13. Movies of cellular and sub-cellular motion by digital holographic microscopy
Directory of Open Access Journals (Sweden)
Yu Lingfeng
Full Text Available Abstract Background Many biological specimens, such as living cells and their intracellular components, often exhibit very little amplitude contrast, making it difficult for conventional bright field microscopes to distinguish them from their surroundings. To overcome this problem phase contrast techniques such as Zernike, Normarsky and dark-field microscopies have been developed to improve specimen visibility without chemically or physically altering them by the process of staining. These techniques have proven to be invaluable tools for studying living cells and furthering scientific understanding of fundamental cellular processes such as mitosis. However a drawback of these techniques is that direct quantitative phase imaging is not possible. Quantitative phase imaging is important because it enables determination of either the refractive index or optical thickness variations from the measured optical path length with sub-wavelength accuracy. Digital holography is an emergent phase contrast technique that offers an excellent approach in obtaining both qualitative and quantitative phase information from the hologram. A CCD camera is used to record a hologram onto a computer and numerical methods are subsequently applied to reconstruct the hologram to enable direct access to both phase and amplitude information. Another attractive feature of digital holography is the ability to focus on multiple focal planes from a single hologram, emulating the focusing control of a conventional microscope. Methods A modified Mach-Zender off-axis setup in transmission is used to record and reconstruct a number of holographic amplitude and phase images of cellular and sub-cellular features. Results Both cellular and sub-cellular features are imaged with sub-micron, diffraction-limited resolution. Movies of holographic amplitude and phase images of living microbes and cells are created from a series of holograms and reconstructed with numerically adjustable
14. Predict drug-protein interaction in cellular networking. (United States)
Xiao, Xuan; Min, Jian-Liang; Wang, Pu; Chou, Kuo-Chen
15. Am I Halfway? Life Lived = Expected Life
DEFF Research Database (Denmark)
Canudas-Romo, Vladimir; Zarulli, Virginia
.3 and 41.4 for women and men respectively). Nevertheless, declines in mortality at young ages radically changed life expectancy and it is found today at the same level as double of halfway-age. While the period perspective puts halfway-age for females and males at 41 and 38.1 in 2010, for cohorts born...
16. The Greenlandic Life Script and Life Stories
DEFF Research Database (Denmark)
Zaragoza Scherman, Alejandra; Berntsen, Dorthe
Adults older than 40 years remember a significantly greater amount of personal life events from their 15 - 30 years of age. This phenomenon is known as the reminiscence bump (Rubin, Rahal, & Poon, 1998). The reminiscence bump is highly populated by emotionally positive events (Rubin & Berntsen, 2...
17. Life Is Hard But Life Is Good
Institute of Scientific and Technical Information of China (English)
Cancer, paralysis, the death of her loved ones...Lei Juli has experienced more unexpected suffering than most other people. She was once in despair, but she managed to overcome all of these catastrophes. She tells the world what her experience has taught her: Love the life you’re given.
18. The zebrafish goosepimples/myosin Vb mutant exhibits cellular attributes of human microvillus inclusion disease ☆ ☆☆
Sidhaye, Jaydeep; Pinto, Clyde Savio; Dharap, Shweta; Jacob, Tressa; Bhargava, Shobha; Sonawane, Mahendra
Microvillus inclusion disease (MVID) is a life-threatening enteropathy characterised by malabsorption and incapacitating fluid loss due to chronic diarrhoea. Histological analysis has revealed that enterocytes in MVID patients exhibit reduction of microvilli, presence of microvillus inclusion bodies and intestinal villus atrophy, whereas genetic linkage analysis has identified mutations in myosin Vb gene as the main cause of MVID. In order to understand the cellular basis of MVID and the asso...
19. The common ancestry of life
Directory of Open Access Journals (Sweden)
Wolf Yuri I
Full Text Available Abstract Background It is common belief that all cellular life forms on earth have a common origin. This view is supported by the universality of the genetic code and the universal conservation of multiple genes, particularly those that encode key components of the translation system. A remarkable recent study claims to provide a formal, homology independent test of the Universal Common Ancestry hypothesis by comparing the ability of a common-ancestry model and a multiple-ancestry model to predict sequences of universally conserved proteins. Results We devised a computational experiment on a concatenated alignment of universally conserved proteins which shows that the purported demonstration of the universal common ancestry is a trivial consequence of significant sequence similarity between the analyzed proteins. The nature and origin of this similarity are irrelevant for the prediction of "common ancestry" of by the model-comparison approach. Thus, homology (common origin of the compared proteins remains an inference from sequence similarity rather than an independent property demonstrated by the likelihood analysis. Conclusion A formal demonstration of the Universal Common Ancestry hypothesis has not been achieved and is unlikely to be feasible in principle. Nevertheless, the evidence in support of this hypothesis provided by comparative genomics is overwhelming. Reviewers this article was reviewed by William Martin, Ivan Iossifov (nominated by Andrey Rzhetsky and Arcady Mushegian. For the complete reviews, see the Reviewers' Report section.
20. Biodegradable Magnetic Particles for Cellular MRI (United States)
Nkansah, Michael Kwasi
Cell transplantation has the potential to treat numerous diseases and injuries. While magnetic particle-enabled, MRI-based cell tracking has proven useful for visualizing the location of cell transplants in vivo, current formulations of particles are either too weak to enable single cell detection or have non-degradable polymer matrices that preclude clinical translation. Furthermore, the off-label use of commercial agents like Feridex®, Bangs beads and ferumoxytol for cell tracking significantly stunts progress in the field, rendering it needlessly susceptible to market externalities. The recent phasing out of Feridex from the market, for example, heightens the need for a dedicated agent specifically designed for MRI-based cell tracking. To this end, we engineered clinically viable, biodegradable particles of iron oxide made using poly(lactide-co-glycolide) (PLGA) and demonstrated their utility in two MRI-based cell tracking paradigms in vivo. Both micro- and nanoparticles (2.1±1.1 μm and 105±37 nm in size) were highly magnetic (56.7-83.7 wt% magnetite), and possessed excellent relaxometry (r2* relaxivities as high as 614.1 s-1mM-1 and 659.1 s -1mM-1 at 4.7 T respectively). Magnetic PLGA micropartides enabled the in vivo monitoring of neural progenitor cell migration to the olfactory bulb in rat brains over 2 weeks at 11.7 T with ˜2-fold greater contrast-to-noise ratio and ˜4-fold better sensitivity at detecting migrated cells in the olfactory bulb than Bangs beads. Highly magnetic PLGA nanoparticles enabled MRI detection (at 11.7 T) of up to 10 rat mesenchymal cells transplanted into rat brain at 100-μm resolution. Highly magnetic PLGA particles were also shown to degrade by 80% in mice liver over 12 weeks in vivo. Moreover, no adverse effects were observed on cellular viability and function in vitro after labeling a wide range of cells. Magnetically labeled rat mesenchymal and neural stem cells retained their ability to differentiate into multiple
1. Time is Life
CERN Document Server
Soares, D S L
The affirmative statement of the existence of extraterrestrial life is tentatively raised to the status of a principle. Accordingly, Fermi's question is answered and the anthropic principle is shown to be falsifiable. The time-scale for the development of life on Earth and the age of the universe are the fundamental quantities upon which the arguments are framed.
2. Quality Adjusted Life Expectancy
NARCIS (Netherlands)
R. Veenhoven (Ruut)
markdownabstract__Abstract__ The term life expectancy is used for statistical estimates of how long a particular kind of people will live. Such estimates are based on the observed length of life of similar people who have died in the past and on probable future changes in mortality. Used in this se
3. Empowering Students for Life (United States)
Henderson, Nancy
This article describes the new Occupational & Life Skills (OLS) program at Bellevue Community College in Bellevue, Washington. The OLS-Venture program, as it is now called, grew out of a series of continuing education classes in personal finance, cooking, and related life skills for people with autism, obsessive-compulsive disorder and other…
4. It's a Frog's Life (United States)
Coffey, Audrey L.; Sterling, Donna R.
When a preschool teacher unexpectedly found tadpoles in the school's outdoor baby pool, she recognized an unusual opportunity for her students to study pond life up close. By following the tadpoles' development, students learned about frogs, life cycles, habitats. (Contains 1 resource.)
5. Life Cycle Environmental Management
DEFF Research Database (Denmark)
Pedersen, Claus Stig; Jørgensen, Jørgen; Pedersen, Morten Als
processes. The discipline of life cycle environmental management (LCEM) focuses on the incorporation of environmental criteria from the life cycles of products and other company activities into the company management processes. This paper introduces the concept of LCEM as an important element...
6. Life cycle management (LCM)
DEFF Research Database (Denmark)
The chapter gives an introduction to Life Cycle Management (LCM) and shows how LCM can be practiced in different contexts and at different ambition levels.......The chapter gives an introduction to Life Cycle Management (LCM) and shows how LCM can be practiced in different contexts and at different ambition levels....
7. The Feast of Life
Institute of Scientific and Technical Information of China (English)
The enjoyment of life covers many things: the enjoyment of ourselves, of home life, of trees, flowers, clouds, winding rivers and falling cataracts and the myriad things in Nature, and then the enjoyment of poetry, art, contemplation, friendship, conversation, and reading, which are all some form or other of the communion of soirits.
8. A life in books
DEFF Research Database (Denmark)
Siegumfeldt, Inge Birgitte; Auster, Paul
"Paul Auster's A Life in Words--a wide-ranging dialogue between Auster and the Danish professor I.B. Siegumfeldt--is a remarkably candid and often surprising celebration of one writer's art, craft, and life. It includes many revelations that have never been shared before, such as that he doesn...
9. Aspirations of Life
Institute of Scientific and Technical Information of China (English)
Zhou; Ningxin
After entering Senior Three, besides strenuous revisions and examinations, what the students think most and discuss most is aspirations of life. As far as I know, most of my classmates have already specified their choices of majors and their aspirations of life. Some classmates excel in science subjects, so they have chosen science and engineering subjects, hoping that they will become scientists or engineers.
10. The right to life
Directory of Open Access Journals (Sweden)
Dr.Sc. Stavri Sinjari
Full Text Available The right to life constitutes one of the main human rights and freedoms, foreseen by article 21 of the Albanian Constitution and article 2 of European Human Rights Convention. No democratic or totalitarian society can function without guarantees and protection of the human right to life We intend to address these issues on our article: What is life. What we legally understand with life. When the life starts and finish. How this right has evolved. Which is the state interest on protecting the life. Should we consider that the life is the same for all. Should the state interfere at any cost to protect the life. Is there any criminal charge for responsible persons to the violation of this right. Is this issue treated by European Human Rights Court. What are the Albanian legal provisions on protection of this right. This research is performed mainly according to a comparative and analytical methodology. Comperative analysis will be present almost throughout the paper. Treatment of issues of this research will be achieved through a system comparable with international standards in particular and the most advanced legislation in this area. At the same time, this research is conducted by analytical and statistical data processing. We believe that our research will make a modest contribution, not only to the legal literature, but also to criminal policy makers, law makers, lawyers and attorneys.
11. Effect of lysosomotropic molecules on cellular homeostasis. (United States)
Kuzu, Omer F; Toprak, Mesut; Noory, M Anwar; Robertson, Gavin P
Weak bases that readily penetrate through the lipid bilayer and accumulate inside the acidic organelles are known as lysosomotropic molecules. Many lysosomotropic compounds exhibit therapeutic activity and are commonly used as antidepressant, antipsychotic, antihistamine, or antimalarial agents. Interestingly, studies also have shown increased sensitivity of cancer cells to certain lysosomotropic agents and suggested their mechanism of action as a promising approach for selective destruction of cancer cells. However, their chemotherapeutic utility may be limited due to various side effects. Hence, understanding the homeostatic alterations mediated by lysosomotropic compounds has significant importance for revealing their true therapeutic potential as well as toxicity. In this review, after briefly introducing the concept of lysosomotropism and classifying the lysosomotropic compounds into two major groups according to their cytotoxicity on cancer cells, we focused on the subcellular alterations mediated by class-II lysosomotropic compounds. Briefly, their effect on intracellular cholesterol homeostasis, autophagy and lysosomal sphingolipid metabolism was discussed. Accordingly, class-II lysosomotropic molecules inhibit intracellular cholesterol transport, leading to the accumulation of cholesterol inside the late endosomal-lysosomal cell compartments. However, the accumulated lysosomal cholesterol is invisible to the cellular homeostatic circuits, hence class-II lysosomotropic molecules also upregulate cholesterol synthesis pathway as a downstream event. Considering the fact that Niemann-Pick disease, a lysosomal cholesterol storage disorder, also triggers similar pathologic abnormalities, this review combines the knowledge obtained from the Niemann-Pick studies and lysosomotropic compounds. Taken together, this review is aimed at allowing readers a better understanding of subcellular alterations mediated by lysosomotropic drugs, as well as their potential
12. Cellular events and biomarkers of wound healing
Directory of Open Access Journals (Sweden)
Shah Jumaat Mohd. Yussof
Full Text Available Researchers have identified several of the cellular events associated with wound healing. Platelets, neutrophils, macrophages, and fibroblasts primarily contribute to the process. They release cytokines including interleukins (ILs and TNF-α, and growth factors, of which platelet-derived growth factor (PDGF is perhaps the most important. The cytokines and growth factors manipulate the inflammatory phase of healing. Cytokines are chemotactic for white cells and fibroblasts, while the growth factors initiate fibroblast and keratinocyte proliferation. Inflammation is followed by the proliferation of fibroblasts, which lay down the extracellular matrix. Simultaneously, various white cells and other connective tissue cells release both the matrix metalloproteinases (MMPs and the tissue inhibitors of these metalloproteinases (TIMPs. MMPs remove damaged structural proteins such as collagen, while the fibroblasts lay down fresh extracellular matrix proteins. Fluid collected from acute, healing wounds contains growth factors, and stimulates fibroblast proliferation, but fluid collected from chronic, nonhealing wounds does not. Fibroblasts from chronic wounds do not respond to chronic wound fluid, probably because the fibroblasts of these wounds have lost the receptors that respond to cytokines and growth factors. Nonhealing wounds contain high levels of IL1, IL6, and MMPs, and an abnormally high MMP/TIMP ratio. Clinical examination of wounds inconsistently predicts which wounds will heal when procedures like secondary closure are planned. Surgeons therefore hope that these chemicals can be used as biomarkers of wounds which have impaired ability to heal. There is also evidence that the application of growth factors like PDGF will help the healing of chronic, nonhealing wounds.
13. Amplitude metrics for cellular circadian bioluminescence reporters. (United States)
St John, Peter C; Taylor, Stephanie R; Abel, John H; Doyle, Francis J
Bioluminescence rhythms from cellular reporters have become the most common method used to quantify oscillations in circadian gene expression. These experimental systems can reveal phase and amplitude change resulting from circadian disturbances, and can be used in conjunction with mathematical models to lend further insight into the mechanistic basis of clock amplitude regulation. However, bioluminescence experiments track the mean output from thousands of noisy, uncoupled oscillators, obscuring the direct effect of a given stimulus on the genetic regulatory network. In many cases, it is unclear whether changes in amplitude are due to individual changes in gene expression level or to a change in coherence of the population. Although such systems can be modeled using explicit stochastic simulations, these models are computationally cumbersome and limit analytical insight into the mechanisms of amplitude change. We therefore develop theoretical and computational tools to approximate the mean expression level in large populations of noninteracting oscillators, and further define computationally efficient amplitude response calculations to describe phase-dependent amplitude change. At the single-cell level, a mechanistic nonlinear ordinary differential equation model is used to calculate the transient response of each cell to a perturbation, whereas population-level dynamics are captured by coupling this detailed model to a phase density function. Our analysis reveals that amplitude changes mediated at either the individual-cell or the population level can be distinguished in tissue-level bioluminescence data without the need for single-cell measurements. We demonstrate the effectiveness of the method by modeling experimental bioluminescence profiles of light-sensitive fibroblasts, reconciling the conclusions of two seemingly contradictory studies. This modeling framework allows a direct comparison between in vitro bioluminescence experiments and in silico ordinary
14. Viscoelastic properties of cellular polypropylene ferroelectrets (United States)
Gaal, Mate; Bovtun, Viktor; Stark, Wolfgang; Erhard, Anton; Yakymenko, Yuriy; Kreutzbruck, Marc
Viscoelastic properties of cellular polypropylene ferroelectrets (PP FEs) were studied at low frequencies (0.3-33 Hz) by dynamic mechanical analysis and at high frequencies (250 kHz) by laser Doppler vibrometry. Relaxation behavior of the in-plane Young's modulus ( Y11 ' ˜ 1500 MPa at room temperature) was observed and attributed to the viscoelastic response of polypropylene matrix. The out-of-plane Young's modulus is very small ( Y33 ' ≈ 0.1 MPa) at low frequencies, frequency- and stress-dependent, evidencing nonlinear viscoelastic response of PP FEs. The high-frequency mechanical response of PP FEs is shown to be linear viscoelastic with Y33 ' ≈ 0.8 MPa. It is described by thickness vibration mode and modeled as a damped harmonic oscillator with one degree of freedom. Frequency dependence of Y33 * in the large dynamic strain regime is described by the broad Cole-Cole relaxation with a mean frequency in kHz range attributed to the dynamics of the air flow between partially closed air-filled voids in PP FEs. Switching-off the relaxation contribution causes dynamic crossover from the nonlinear viscoelastic regime at low frequencies to the linear viscoelastic regime at high frequencies. In the small strain regime, contribution of the air flow seems to be insignificant and the power-law response, attributed to the mechanics of polypropylene cell walls and closed air voids, dominates in a broad frequency range. Mechanical relaxation caused by the air flow mechanism takes place in the sound and ultrasound frequency range (10 Hz-1 MHz) and, therefore, should be taken into account in ultrasonic applications of the PP FEs deal with strong exciting or receiving signals.
15. Cysteinyl-Leukotriene Receptors and Cellular Signals
Directory of Open Access Journals (Sweden)
G. Enrico Rovati
Full Text Available Cysteinyl-leukotrienes (cysteinyl-LTs exert a range of proinflammatory effects, such as constriction of airways and vascular smooth muscle, increase of endothelial cell permeability leading to plasma exudation and edema, and enhanced mucus secretion. They have proved to be important mediators in asthma, allergic rhinitis, and other inflammatory conditions, including cardiovascular diseases, cancer, atopic dermatitis, and urticaria. The classification into subtypes of the cysteinyl-LT receptors (CysLTRs was based initially on binding and functional data, obtained using the natural agonists and a wide range of antagonists. CysLTRs have proved remarkably resistant to cloning. However, in 1999 and 2000, the CysLT1R and CysLT2R were successfully cloned and both shown to be members of the G-protein coupled receptors (GPCRs superfamily. Molecular cloning has confirmed most of the previous pharmacological characterization and identified distinct expression patterns only partially overlapping. Recombinant CysLTRs couple to the Gq/11 pathway that modulates inositol phospholipids hydrolysis and calcium mobilization, whereas in native systems, they often activate a pertussis toxin-insensitive Gi/o-protein, or are coupled promiscuously to both G-proteins. Interestingly, recent data provide evidence for the existence of an additional receptor subtype that seems to respond to both cysteinyl-LTs and uracil nucleosides, and of an intracellular pool of CysLTRs that may have roles different from those of plasma membrane receptors. Finally, a cross-talk between the cysteinyl-LT and the purine systems is being delineated. This review will summarize recent data derived from studies on the molecular and cellular pharmacology of CysLTRs.
16. The cellular composition of the marsupial neocortex. (United States)
In the current investigation we examined the number and proportion of neuronal and non-neuronal cells in the primary sensory areas of the neocortex of a South American marsupial, the short-tailed opossum (Monodelphis domestica). The primary somatosensory (S1), auditory (A1), and visual (V1) areas were dissected from the cortical sheet and compared with each other and the remaining neocortex using the isotropic fractionator technique. We found that although the overall sizes of V1, S1, A1, and the remaining cortical regions differed from each other, these divisions of the neocortex contained the same number of neurons, but the remaining cortex contained significantly more non-neurons than the primary sensory regions. In addition, the percent of neurons was higher in A1 than in the remaining cortex and the cortex as a whole. These results are similar to those seen in non-human primates. Furthermore, these results indicate that in some respects, such as number of neurons, the neocortex is homogenous across its extent, whereas in other aspects of organization, such as non-neuronal number and percentage of neurons, there is non-uniformity. Whereas the overall pattern of neuronal distribution is similar between short-tailed opossums and eutherian mammals, short-tailed opossum have a much lower cellular and neuronal density than other eutherian mammals. This suggests that the high neuronal density cortices of mammals such as rodents and primates may be a more recently evolved characteristic that is restricted to eutherians, and likely contributes to the complex behaviors we see in modern mammals.
17. Cellular aging: theories and technological influence
Directory of Open Access Journals (Sweden)
Silvia Mercado-Sáenz
Full Text Available The aim of this article was to review the factors that influence the aging, relationship of aging with the biological rhythms and new technologies as well as the main theories to explain the aging, and to analysis the causes of aging. The theories to explain the aging could be put into two groups: those based on a program that controlled the regression of the organism and those that postulated that the deterioration was due to mutations. It was concluded that aging was a multifactorial process. Genetic factors indicated the maximum longevity of the individual and environmental factors responsible for the real longevity of the individual. It would be necessary to guarantee from early age the conservation of a natural life rhythm.
18. Evolution from Cellular to Social Scales
CERN Document Server
Skjeltorp, Arne T
19. Origin of Life
CERN Document Server
Lal, Ashwini Kumar
The evolution of life has been a big enigma despite rapid advancements in the field of astrobiology, astrophysics and genetics in recent years. The answer to this puzzle has been as mindboggling as the riddle relating to evolution of Universe itself. Despite the fact that panspermia has gained considerable support as a viable explanation for origin of life on the Earth and elsewhere in the Universe, the issue however, remains far from a tangible solution. This paper examines the various prevailing hypotheses regarding origin of life like abiogenesis, RNA(ribonucleic acid) world, iron-sulphur world, panspermia, and concludes that delivery of life-bearing organic molecules by the comets in the early epoch of the Earth alone possibly was not responsible for kickstarting the process of evolution of life on our planet.
20. Everyday Family Life
DEFF Research Database (Denmark)
Westerling, Allan
What are the implications of ongoing processes of modernization and individualization for social relations in everyday life? This overall research question is the pivotal point in empirical studies at the Centre of Childhood-, Youth- and Family Life Research at Roskilde University. One research...... project takes a social psychological approach, combining quantitative and qualitative methods in a longitudinal study of family life. The knowledge interest of the project is the constitution of communality and individuality in everyday family life. This article presents the theoretical framework...... and the conceptualization of everyday family life of the social psychological research agenda in this field. The main line of argument is that ongoing modernization is synonymous with accelerated processes of detraditionalization and individualization. This calls for a re-conceptualisation of ‘the family’ which enables...
1. Life in Extreme Environments (United States)
Rothschild, Lynn; Bray, James A. (Technical Monitor)
Each recent report of liquid water existing elsewhere in the solar system has reverberated through the international press and excited the imagination of humankind. Why? Because in the last few decades we have come to realize that where there is liquid water on Earth, virtually no matter what the physical conditions, there is life. What we previously thought of as insurmountable physical and chemical barriers to life, we now see as yet another niche harboring 'extremophiles'. This realization, coupled with new data on the survival of microbes in the space environment and modeling of the potential for transfer of life between celestial bodies, suggests that life could be more common than previously thought. Here we critically examine what it means to be an extremophile, the implications of this for evolution, biotechnology, and especially the search for life in the cosmos.
2. Life History Approach
DEFF Research Database (Denmark)
Olesen, Henning Salling
as in everyday life. Life histories represent lived lives past, present and anticipated future. As such they are interpretations of individuals’ experiences of the way in which societal dynamics take place in the individual body and mind, either by the individual him/herself or by another biographer. The Life...... History approach was developing from interpreting autobiographical and later certain other forms of language interactive material as moments of life history, i.e. it is basically a hermeneutic approach. Talking about a psycho-societal approach indicates the ambition of attacking the dichotomy...... of the social and the psychic, both in the interpretation procedure and in some main theoretical understandings of language, body and mind. My article will present the reflections on the use of life history based methodology in learning and education research as a kind of learning story of research work....
3. DNA Mismatch Repair System: Repercussions in Cellular Homeostasis and Relationship with Aging
Directory of Open Access Journals (Sweden)
Juan Cristóbal Conde-Pérezprina
Full Text Available The mechanisms that concern DNA repair have been studied in the last years due to their consequences in cellular homeostasis. The diverse and damaging stimuli that affect DNA integrity, such as changes in the genetic sequence and modifications in gene expression, can disrupt the steady state of the cell and have serious repercussions to pathways that regulate apoptosis, senescence, and cancer. These altered pathways not only modify cellular and organism longevity, but quality of life (“health-span”. The DNA mismatch repair system (MMR is highly conserved between species; its role is paramount in the preservation of DNA integrity, placing it as a necessary focal point in the study of pathways that prolong lifespan, aging, and disease. Here, we review different insights concerning the malfunction or absence of the DNA-MMR and its impact on cellular homeostasis. In particular, we will focus on DNA-MMR mechanisms regulated by known repair proteins MSH2, MSH6, PMS2, and MHL1, among others.
4. Monocyte Activation in Immunopathology: Cellular Test for Development of Diagnostics and Therapy
Directory of Open Access Journals (Sweden)
Ekaterina A. Ivanova
Full Text Available Several highly prevalent human diseases are associated with immunopathology. Alterations in the immune system are found in such life-threatening disorders as cancer and atherosclerosis. Monocyte activation followed by macrophage polarization is an important step in normal immune response to pathogens and other relevant stimuli. Depending on the nature of the activation signal, macrophages can acquire pro- or anti-inflammatory phenotypes that are characterized by the expression of distinct patterns of secreted cytokines and surface antigens. This process is disturbed in immunopathologies resulting in abnormal monocyte activation and/or bias of macrophage polarization towards one or the other phenotype. Such alterations could be used as important diagnostic markers and also as possible targets for the development of immunomodulating therapy. Recently developed cellular tests are designed to analyze the phenotype and activity of living cells circulating in patient’s bloodstream. Monocyte/macrophage activation test is a successful example of cellular test relevant for atherosclerosis and oncopathology. This test demonstrated changes in macrophage activation in subclinical atherosclerosis and breast cancer and could also be used for screening a panel of natural agents with immunomodulatory activity. Further development of cellular tests will allow broadening the scope of their clinical implication. Such tests may become useful tools for drug research and therapy optimization.
5. Cardiac regeneration and cellular therapy: is there a benefit of exercise? (United States)
Figueiredo, P A; Appell Coriolano, H-J; Duarte, J A
Cardiovascular diseases (CVD) are a global epidemic in developed countries. Cumulative evidence suggests that myocyte formation is preserved during postnatal life, in adulthood or senescence, suggesting the existence of a growth reserve of the heart throughout lifespan. Several medical therapeutic approaches to CVD have considerably improved the clinical outcome for patients. Intense interest has been focused on regenerative medicine as an emerging strategy for CVD. Cellular therapeutic approaches have been proposed for enhancing survival and propagation of stem cells in myocardium, leading to cardiac cellular repair. Strong epidemiological and clinical data exists concerning the impact of regular physical exercise on cardiovascular health. Several mechanisms of acute and chronic exercise-induced cardiovascular adaptations to exercise have been presented, considering primary and secondary prevention of CVD. In this context, exercise-related improvements in the function and regeneration of the cardiovascular system may be associated with the exercise-induced activation, mobilization, differentiation, and homing of stem and progenitor cells. In this review several topics will be addressed concerning the relation between exercise, recruitment and biological activity of blood-circulating progenitor cells and resident cardiac stem cells. We hypothesize that exercise-induced stem cell activation may enhance overall heart function and improve the efficacy of cardiac cellular therapeutic protocols.
6. DNA Mismatch Repair System: Repercussions in Cellular Homeostasis and Relationship with Aging (United States)
Conde-Pérezprina, Juan Cristóbal; León-Galván, Miguel Ángel; Konigsberg, Mina
7. What traces of life can we expect on Mars? Lessons from the early Earth (United States)
Westall, F.
crytic but abundant evidence of past life [3] in the form of fossilised microbial colonies on the surfaces of detrital volcanic grains, in fine volcanic dust deposits, and in the pores of scoriaceous pumice, etc (Fig. 2). Again, these traces can be identified only through petrographic thin section and SEM study. The bulk organic carbon contents of these rocks is very low, ~0.01-0.05% and their C-isotope signature (~ - 25 ‰), although indicative of life, could also be produced through abiological processes [5]. Only the combination of multiple analytical techniques, of which high resolution microscopy is one of the most fundamental, permitted a biogenic origin to be attributed to these structures. Biolaminated sediments, including domal stromatolites, in Early Archaean terrains are the result of anaerobic photosynthetic activity [6-9]. Photosynthesis is a relatively evolved metabolism. Evidence of photosynthetic activity is preserved in the rhythmic laminations found in sediments deposited at the edges of shallow basins due to the growth of photosynthetic microbial mats on the sediment surfaces. These laminations, ranging from a few tens of microns to packets up to a couple of millimetres in thickness, are macroscopically and microscopically visible (Fig. 3). Given sufficient tectonic stability of the shallow water, carbonate platform environments in which they form, photosynthetic microorganisms on the early Earth formed domical stromatolites. In the case of biolaminated sediments, bulk organic carbon contents are again low (0.01 %) but the individual biolaminae have a higher carbon content (0.07%). Certain highly carbonaceous biomaminated cherts have carbon contents ranging up to 0.5% [10]. Photosynthetic organisms, however, are not only restricted to stable substrates and may also be planktonic, living free in the upper layers of water bodies. Evidence of planktonic microorganisms on the early Earth has been suggested by [10]. Whether floating in the ocean or forming
8. Emergence of Life
Directory of Open Access Journals (Sweden)
Marie-Paule Bassez
Full Text Available Indeed, even if we know that many individual components are necessary for life to exist, we do not yet know what makes life emerge. One goal of this journal Life is to juxtapose articles with multidisciplinary approaches and perhaps to answer in the near future this question of the emergence of life. Different subjects and themes will be developed, starting of course with the multiple definitions of life and continuing with others such as: life diversity and universality; characteristics of living systems; thermodynamics with energy and entropy; kinetics and catalysis; water in its different physical states; circulation of sap and blood and its origin; the first blood pump and first heart; the first exchange of nutrients between cells, sap and blood; essential molecules of living systems; chirality; molecular asymmetry and its origin; formation of enantiomer excess and amplification; microscopic observations on a micrometer and sub-micrometer scales, at molecular and atomic levels; the first molecules at the origin of genetic information, viroids, circular RNA; regions of space or the area inside membranes and cells capable of initiating and maintaining life; phenomena at the origin of the emergence of life; molecules studied in the traditional field of chemistry and in the recent field of nanoscience governed by new laws; interaction between the individual molecules and components of living systems; interaction between living systems and the environment; transfer of information through generations; continuation of life from one generation to the next; prebiotic chemistry and prebiotic signatures on Earth, on Mars, on other planets; biosignatures of the first forms of life; fossils and pseudofossils dating 3.5 Ga ago and more recent ones; experimental fossilization; pluricellular eukaryotes dating 2.1 Ga ago; sudden increase in oxygen in the atmosphere around 2.0 to 2.5 Ga ago and its relation to geology; shell symmetry; aging with
9. Emergence of Life. (United States)
Bassez, Marie-Paule
10. Influence of income on tertiary students acquisition of cellular products
Directory of Open Access Journals (Sweden)
G. A.P Drotsky
Full Text Available Purpose: The purpose of the article is to determine whether there are any differences between high and low-income group students in their selection of a cellular phone brand or network operator. Design/Methodology/Approach: Four hypotheses are set to determine if there are any significant differences between the two income groups in current decision-making. It is established that there exist no significant difference between high and low-income students in their selection of cellular phones and network operators. The levels of agreement or disagreement on various statements do, however, give an indication of the importance that students place on aspects that they view as important when acquiring a cellular phone or network operator.Findings: In the article, it is established that no significant differences exist between the two income groups. The levels of agreement or disagreement indicate the importance that subscription method, social value, service quality and branding has on student decision-making. Implications: The article provides a better understanding of the influence that income plays in student's decision-making in acquiring cellular products and services. Possible future research in student cellular usage can be guided through the information obtained in this article. Originality/Value: The article provides information to cellular network operators, service providers and cellular phone manufactures regarding the influence of income on students' acquisition of cellular products and services. Information from the article can assist in the establishment of marketing plans for the student market by these role players.
11. An algebraic study of unitary one dimensional quantum cellular automata
CERN Document Server
Arrighi, P
We provide algebraic characterizations of unitary one dimensional quantum cellular automata. We do so both by algebraizing existing decision procedures, and by adding constraints into the model which do not change the quantum cellular automata's computational power. The configurations we consider have finite but unbounded size.
12. Infinite Time Cellular Automata: A Real Computation Model
CERN Document Server
Givors, Fabien; Ollinger, Nicolas
We define a new transfinite time model of computation, infinite time cellular automata. The model is shown to be as powerful than infinite time Turing machines, both on finite and infinite inputs; thus inheriting many of its properties. We then show how to simulate the canonical real computation model, BSS machines, with infinite time cellular automata in exactly \\omega steps.
Institute of Scientific and Technical Information of China (English)
XIE Huimin
The limit languages of cellular automata are defined and theircomplexity are discussed. New tools, which include skew evolution, skew periodic string, trace string, some algebraic calculation method, and restricted membership problem, are developed through a discussion focusing on the limit language of an elementary cellular automata of rule 94.It is proved that this language is non-regular.
14. Cellularity of diagram algebras as twisted semigroup algebras
CERN Document Server
Wilcox, Stewart
The Temperley-Lieb and Brauer algebras and their cyclotomic analogues, as well as the partition algebra, are all examples of twisted semigroup algebras. We prove a general theorem about the cellularity of twisted semigroup algebras of regular semigroups. This theorem, which generalises a recent result of East about semigroup algebras of inverse semigroups, allows us to easily reproduce the cellularity of these algebras.
15. Teen Perceptions of Cellular Phones as a Communication Tool (United States)
Jonas, Denise D.
The excitement and interest in innovative technologies has spanned centuries. However, the invention of the cellular phone has surpassed previous technology interests, and changed the way we communicate today. Teens make up the fastest growing market of current cellular phone users. Consequently, the purpose of this study was to determine teen…
16. Cellular phones: to talk or not to talk. (United States)
Munshi, Anusheel
Cellular phone use has exponentially increased in recent years. There have been some reports of an association of use of these phones with brain tumours. This article gives a summary view of the possible effects related to cellular phone use. It further discusses if we need to observe precautions while using these devices.
17. Fluorescopic evaluation of protein-lipid relations in cellular signalling.
NARCIS (Netherlands)
Pap, E.H.W.
IntroductionCellular communication is partly mediated through the modulation of protein activity, structure and dynamics by lipids. In contrast to the biochemical aspects of lipid signalling, relatively little is known about the physical properties of the "signal" lipids (lipids involved in cellular
18. 47 CFR 22.911 - Cellular geographic service area. (United States)
... PUBLIC MOBILE SERVICES Cellular Radiotelephone Service § 22.911 Cellular geographic service area. The... Watts (2) The distance from a cell transmitting antenna located in the Gulf of Mexico Service Area (GMSA... for unserved area applications proposing a cell with an ERP not exceeding 10 Watts, the value for...
19. Insights Into Quantitative Biology: analysis of cellular adaptation
Agoni, Valentina
In the last years many powerful techniques have emerged to measure protein interactions as well as gene expression. Many progresses have been done since the introduction of these techniques but not toward quantitative analysis of data. In this paper we show how to study cellular adaptation and how to detect cellular subpopulations. Moreover we go deeper in analyzing signal transduction pathways dynamics.
20. Life on Titan (United States)
Potashko, Oleksandr
Volcanoes engender life on heavenly bodies; they are pacemakers of life. All planets during their period of formation pass through volcanism hence - all planets and their satellites pass through the life. Tracks of life If we want to find tracks of life - most promising places are places with volcanic activity, current or past. In the case of just-in-time volcanic activity we have 100% probability to find a life. Therefore the most perspective “search for life” are Enceladus, Io and comets, further would be Venus, Jupiter’s satellites, Saturn’s satellites and first of all - Titan. Titan has atmosphere. It might be result of high volcanic activity - from one side, from other side atmosphere is a necessary condition development life from procaryota to eucaryota. Existence of a planet means that all its elements after hydrogen formed just there inside a planet. The forming of the elements leads to the formation of mineral and organic substances and further to the organic life. Development of the life depends upon many factors, e.g. the distance from star/s. The intensity of the processes of the element formation is inversely to the distance from the star. Therefore we may suppose that the intensity of the life in Mercury was very high. Hence we may detect tracks of life in Mercury, particularly near volcanoes. The distance from the star is only one parameter and now Titan looks very active - mainly due to interior reason. Its atmosphere compounds are analogous to comet tail compounds. Their collation may lead to interesting result as progress occurs at one of them. Volcanic activity is as a source of life origin as well a reason for a death of life. It depends upon the thickness of planet crust. In the case of small thickness of a crust the probability is high that volcanoes may destroy a life on a planet - like Noachian deluge. Destroying of the life under volcano influences doesn’t lead to full dead. As result we would have periodic Noachian deluge or |
1b2a55c99bb5c922 | SYM: A new symmetry -- finding package for Mathematica. A new package for computing the symmetries of systems of differential equations using Mathematica is presented. Armed with adaptive equation solving capability and pattern matching techniques, this package is able to handle systems of differential equations of arbitrary order and number of variables with the least memory cost possible. By harnessing the capabilities of Mathematica’s front end, all the intermediate mathematical expressions, as well as the final results apear in familiar form. This renders the package a very useful tool for introducing the symmetry solving method to students and non-mathematicians.
References in zbMATH (referenced in 32 articles )
Showing results 1 to 20 of 32.
Sorted by year (citations)
1 2 next
1. Abdulwahhab, Muhammad Alim; Jhangeer, Adil: Symmetries and generalized higher order conserved vectors of the wave equation on Bianchi I spacetime (2017)
2. Dimas, Stylianos; Freire, Igor Leite: Study of a fifth order PDE using symmetries (2017)
3. Matadi, Maba Boniface: Symmetry and conservation laws for tuberculosis model (2017)
4. Sinuvasan, R.; Paliathanasis, Andronikos; Morris, Richard M.; Leach, Peter G.L.: Solution of the master equation for quantum Brownian motion given by the Schrödinger equation (2017)
5. Massoukou, R.Y.M’pika; Govinder, K.S.: Symmetry analysis for hyperbolic equilibria using a TB/dengue fever model (2016)
6. Paliathanasis, Andronikos; Krishnakumar, K.; Tamizhmani, K.M.; Leach, Peter G.L.: Lie symmetry analysis of the Black-Scholes-Merton model for European options with stochastic volatility (2016)
8. Paliathanasis, Andronikos; Morris, Richard M.; Leach, Peter G.L.: Lie symmetries of $(1+2)$ nonautonomous evolution equations in financial mathematics (2016)
9. Paliathanasis, Andronikos; Vakili, Babak: Closed-form solutions of the Wheeler-DeWitt equation in a scalar-vector field cosmological model by Lie symmetries (2016)
10. Casati, Matteo: On deformations of multidimensional Poisson brackets of hydrodynamic type (2015)
11. Krishnakumar, K.; Tamizhmani, K.M.; Leach, P.G.L.: Algebraic solutions of the Hirota bilinear form for the Korteweg-de Vries and Boussinesq equations (2015)
12. Tamizhmani, K.M.; Krishnakumar, K.; Leach, P.G.L.: Symmetries and reductions of order for certain nonlinear third- and second-order differential equations with arbitrary nonlinearity (2015)
14. Tamizhmani, K.M.; Krishnakumar, K.; Leach, P.G.L.: Algebraic resolution of equations of the Black-Scholes type with arbitrary time-dependent parameters (2014)
15. Adem, Abdullahi Rashid; Khalique, Chaudry Masood: New exact solutions and conservation laws of a coupled Kadomtsev-Petviashvili system (2013)
16. Bozhkov, Y.; Dimas, S.: Group classification and conservation laws for a two-dimensional generalized Kuramoto-Sivashinsky equation (2013)
17. Leach, P.G.L.: Derivatives of differential sequences (2013)
18. O’Hara, J.G.; Sophocleous, C.; Leach, P.G.L.: Application of Lie point symmetries to the resolution of certain problems in financial mathematics with a terminal condition (2013)
19. O’Hara, J.G.; Sophocleous, C.; Leach, P.G.L.: Symmetry analysis of a model for the exercise of a barrier option (2013)
20. Sophocleous, C.; Leach, P.G.L.: Thin films: increasing the complexity of the model (2012)
1 2 next |
4ffdc2cd95370626 | Tuesday, 4 July 2017
UNDERSTANDING NATURAL PHENOMENA: Self-Organization and Emergence in Complex Systems
Vinod Wadhawan
Book details
Paperback: 514 pages
Publisher: CreateSpace Independent Publishing Platform; 2 edition (September 13, 2017)
Language: English
ISBN-10: 1548527939
ISBN-13: 978-1548527938
Product Dimensions: 6.7 x 1.3 x 9.6 inches
Shipping Weight: 2.2 pounds
Legend for the front cover
A flower is a work of art, but there is no artist involved. The flower evolved from lesser things which, in turn, evolved from still lesser things, and so on, all the way down. For example, the symmetry of a flower is the end result of a long succession of spontaneous processes and events, as also of some simple ‘local rules’ in operation, all constrained, even aided, by the infallible second law of thermodynamics for ‘open’ systems. In fact, the second law is the mother of all organizing principles, leading to the enormous amounts of cumulative self-organization, structure, symmetry, and ‘emergence’ we see in Nature.
About the book
Science is all about trying to understand natural phenomena under the strict discipline imposed by the celebrated scientific method. Practically all the systems we encounter in Nature are dynamical systems, meaning that they evolve with time. Among them there are the ‘simple’ or ‘simplifiable’ systems, which can be handled by traditional, reductionistic science; and then there are 'complex’ systems, for which nonreductionistic approaches have to be attempted for understanding their evolution. In this book the author makes a case that a good way to understand a large number of natural phenomena, both simple and complex, is to focus on their self-organization and emergence aspects. Self-organization and emergence are rampant in Nature and, given enough time, their cumulative effects can be so mind-boggling that many people have great difficulty believing that there is no designer involved in the emergence of all the structure and order we see around us. But it is really quite simple to understand how and why we get so much ‘order for free’. It all happens because, as ordained by the infallible second law of thermodynamics, all ‘thermodynamically open’ systems in our ever-expanding and cooling (and therefore gradient-creating) universe constantly tend to move towards equilibrium and stability, often ending up in ordered configurations. In other words, order emerges because Nature tends to find efficient ways to annul gradients of all types.
This book will help you acquire a good understanding of the essential features of many natural phenomena, via the complexity-science route. It has four parts: (1) Complexity Basics; (2) Pre-Human Evolution of Complexity; (3) Humans and the Evolution of Complexity; and (4) Appendices. The author gives centrestage to the second law of thermodynamics for ‘open’ systems, which he describes as ‘the mother of all organizing principles’. He also highlights a somewhat unconventional statement of this law: ‘Nature abhors gradients’.
The book is written at two levels, one of which hardly uses any mathematical equations; the mathematical treatment of some relevant topics has been pushed to the last part of the book, in the form of ten appendices. Therefore the book should be accessible to a large readership. It is a general-science book written in a reader-friendly language, but without any dumbing down of the narrative.
I am a scientist and I take pride in the fact that we humans have invented and perfected the all-important scientific method for investigating natural phenomena. Wanting to understand natural phenomena is an instinctive urge in all of us. In this book I make a case that taking the complexity-science route for satisfying this urge can be a richly rewarding experience. Complexity science enables us (fully or partially) to find answers to even the most fundamental questions we may ask about ourselves and about our universe. We call them the Big Questions: How did our universe emerge out of ‘nothing’ at a certain point in time; or is it that it has been there always? Why and how has structure arisen in our universe: galaxies, stars, planets, life forms? How did life emerge out of nonlife? How does intelligence emerge out of nonintelligence? These are difficult questions. But, as Mark Twain is said to have said, ‘there is something fascinating about science. One gets such wholesale of conjecture out of such a trifling investment of fact’. As you will see in this book, the Big Questions, as also many others, can be answered with a good amount of credibility by using just the following ‘trifling investment of facts’:
1. Gradients tend to be obliterated spontaneously. Concentration gradients, temperature gradients, pressure gradients, etc. all tend to decrease spontaneously, till a state of equilibrium is reached, after which the gradients cannot fall any further. This is actually nothing but a nonstatistical-mechanical version of the second law of thermodynamics. [Why do gradients arise at all, at a cosmic level? The original cause of all gradients in the cosmos is the continual expansion and cooling of our universe. At the local (terrestrial) level, the energy impinging on our ecosphere from the Sun is the main factor creating gradients.]
2. It requires energy to prevent a gradient from annulling itself, or to create a new gradient. A refrigerator works on this principle, as also so many other devices.
3. Left to themselves, things go from a state of less disorder to a state of more disorder, spontaneously. This is the more familiar version of the second law of thermodynamics. Examples abound. Molecules in a gas occupy a larger volume spontaneously if the larger volume is made available to them; but there is practically no way they would occupy the smaller volume again, on their own.
4. If a system is not left to itself, i.e., if it is not an isolated system and can therefore exchange energy and/or matter with its surroundings, then a state of lower disorder can sometimes arise locally. [This is in keeping with the second law of thermodynamics, as generalized to cover ‘thermodynamically open’ systems also.] Growth of a crystal from a fluid is an example. A crystal has a remarkably high degree of order and design, even though there is no designer involved. To borrow a phrase from Stuart Kauffman, this is ‘order for free’.
5. If a sustained input of energy drives a system far away from equilibrium, the system may develop a structure or tendencies which enable it to dissipate energy more and more efficiently. This is called dissipation-driven adaptive organization. England (2013) has shown that all dynamical evolution is more likely to lead to structures and systems which get better and better at absorbing and dissipating energy from the environment.
6. The total energy of the universe is conserved. This is known as the energy-conservation principle. Since energy and mass are interconvertible, the term ‘energy’ used here really means ‘mass plus energy’.
7. Natural phenomena are governed by the laws of quantum mechanics. Classical mechanics, though adequate for understanding many day-to-day or ‘macroscopic’ phenomena, is only a special, limiting, case of quantum mechanics.
8. There is an uncertainty principle in quantum mechanics, one version of which says that the energy-conservation principle can be violated, though only for a very small, well-specified duration. The larger the violation of energy conservation, the smaller this duration is.
9. It can be understood fully in terms of the second law of thermodynamics that in a system of interacting entities, entirely new (unexpected) behaviour or properties can arise if the interactions are appropriate and strong enough. ‘More is different’ (Anderson 1972). The technical term for this occurrence is emergence. Complexity science is mostly about self-organization and emergence, and we shall encounter many examples of them in this book. To mention a couple of them here: the emergence of life out of nonlife; and the emergence of human intelligence in a system of nonintelligent entities, namely the neurons. Interestingly, the second law of thermodynamics is itself an emergent law. The motion of a molecule is governed by classical or ‘Newtonian’ mechanics, which has time-reversal symmetry, meaning that if you could somehow reverse the direction of time, the Newtonian equations of motion would still hold. And yet, when you put a large number of these molecules together, there are interactions among them and there emerges a direction of time: Time increases in the direction in which overall disorder increases. As I shall discuss later in the book, even the causality principle is an emergent principle.
10. The dynamics of evolution of a complex system of interacting entities is mostly through the operation of ‘local rules’. Chua (1998) has introduced the important notion of cellular nonlinear networks (CNNs), and enunciated a local-activity dogma. According to it, in order for a ‘nonconservative’ system or model to exhibit any form of complexity, the associated CNN parameters must be such that that either the cells or their couplings are locally active.
11. The most adaptable are the most likely to survive and propagate. Any species, if it is not to become extinct, must be able to survive and propagate, in an environment in which there is always some intra-species and/or inter-species competition because different individuals may all have to fight for the same limited resources like food or space. The fittest individuals or groups for this task (i.e., the most adaptable ones) stand a greater chance of winning this game and, as a result, the population gets better and better (more adapted) at survival and propagation in the prevailing conditions: the more adaptable or ‘fitter’ ones are not only more likely to survive, but also stand a greater chance to pass on their genes to the next generation.
It is remarkable that an enormous number and variety of natural phenomena can be understood in terms of just these few ‘commonsense’ facts, by adopting the complexity-science approach. Complexity science helps us understand, to a small or large extent, even those natural phenomena which fall outside the scope of conventional reductionistic science.
What is complexity science, and how is its operational space different from that of conventional science? Let us begin by answering the question: What does the phrase ‘system under investigation’ mean in conventional science? Strictly speaking, since everything interacts with everything else, the entire cosmos is one big single system. But such an approach cannot take us very far because it is neither tractable nor useful. So, depending on our interest, we define a subsystem which is a ‘quasi-isolated system’. A quasi-isolated system is an imaginary construct, such that what is outside it can be, to a good approximation, treated as an unchanging (usually large) ‘background’, or ‘heat bath’ etc. This approach is so common in conventional science that we just say ‘system’ when what we really mean is a carefully identified quasi-isolated system. An example from rocket science will illustrate the point. For predicting the initial trajectory of a rocket, we can assume safely that a truck moving an adequate distance away from the launching site will not affect the trajectory significantly. Conventional science deals mostly with such ‘simple’ or ‘simplifiable’ systems. Complexity science, by contrast, deals with systems which must be treated in their totality; for them it is mostly not possible to identify a ‘quasi-isolated subpart’.
By definition, a complex system is one which comprises of a large number of ‘members’, ‘elements’ or ‘agents’, which interact substantially with one another and with the environment, and which have the potential to generate qualitatively new collective behaviour. That is, there can be an emergence of new (unexpected) spatial, temporal, or functional structures or patterns. Different complex systems have different ‘degrees of complexity’, and the amount of information needed to describe the structure and function of a system is one of the measures of that degree of complexity (Wadhawan 2010).
‘Complexity’ is something we associate with a complex system (defined above). It is a technical term, and does not mean the same thing as ‘complicatedness’.
The idea of writing this book took shape when I was working on my book Smart Structures: Blurring the Distinction between the Living and the Nonliving (Wadhawan 2007). Naturally, there was extensive exposure to concepts from complexity science. Like the subject of smart structures, complexity science also cuts across various disciplines, and highlights the basic unity of all science. The uneasy feeling grew in me that, in spite of the fact that complexity is so pervasive and important, it is not introduced as a well-defined subject even to science students. They are all taught, say, thermodynamics and quantum mechanics routinely, but not complexity science. Even among research workers, although a large number are working on one complex system or another (and not just in physics or chemistry, but also in biology, brain science, computational science, economics, etc.), not many have learnt about the basics of complexity science in a coherent manner at an early stage of their career. I have tried to write a book on complexity that takes this subject to the classroom at a fairly introductory but comprehensive level. There is no dumbing down of facts, even at the cost of appearing ‘too technical’ at times.
Here are some examples of complex systems: beehives; ant colonies; self-organized supramolecular assemblies; ecosystems; spin-glasses and other complex materials; stock markets; economies of nations; the world economy; the global weather pattern. The origin and evolution of life on Earth was itself a series of emergent phenomena that occurred in highly complex systems. Evolution of complexity is generally a one-way traffic: The new emergent features may (in principle) be deducible from, but are not reducible to, those operating at the next lower level of complexity. Reductionism stands discounted.
As I said earlier, emergent behaviour is a hallmark of complex systems. Human intelligence is also an emergent property: Thoughts, feelings, and purpose result from the interactions among the neurons. Similarly, even memories are emergent phenomena, arising out of the interactions among the large number of ‘unmemory-like’ fragments of information stored in the brain.
What goes on in a complex system is essentially as follows: There is a large number of interacting agents, which may be viewed as forming a network. In the network-theory jargon, the agents are the ‘nodes’ of the network, and a line joining any two nodes (i.e., an ‘edge’) represents the interaction between that pair of agents. Any interaction amounts to communication or exchange of information. The action or behaviour of each agent is determined by what it ‘sees’ others doing, and its actions, in turn, determine what the other agents may do. Further, the term game-playing is used for this mutual interaction in the case of those complex systems in which the agents are ‘thinking’ organisms (particularly humans). Therefore a partial list of topics covered in this book is: information theory; network theory; cellular automata; game theory.
Exchange of information in complex systems, controlled like other macroscopic phenomena by the second law of thermodynamics, leads to self-organization and emergence. In particular, biological evolution is a natural and inevitable consequence of such ongoing processes, an additional factor for them being the cumulative effects of mutations and natural selection. This book has chapters on evolution of complexity of all types: cosmic, chemical, biological, artificial, cultural.
Networked or ‘webbed’ systems have the all-important nonlinearity feature. In fact, nonlinear response, in conjunction with substantial departure from equilibrium, is the crux of complex behaviour. There are many types of nonlinear systems. The most important for our purposes in this book are those in which, although the output (y) is indeed proportional to the input (x), the proportionality factor (m) is not independent of the input; i.e., m is not a constant factor, but rather varies with what x is. For a linear system we have y = m x, with m having a fixed value, not varying with x. But for a nonlinear system, the equation becomes y = m(x) x; now m is not a constant. This has far-reaching consequences for the (always networked) complex system. In particular, its future progression of events is very sensitive to conditions at any particular point of time (the so-called ‘initial conditions’). This sensitivity to initial conditions is also the hallmark of chaotic systems. In fact, there is a well-justified viewpoint that it is impossible to discuss several types of complex systems without bringing in concepts from chaos theory. And, what is more, complex systems tend to evolve to a configuration wherein they can operate near the so-called edge of chaos (neither too much order, nor too much chaos). There is a chapter on chaos which elaborates on these things.
Inanimate systems can also be complex. Whirlpools and whirlwinds are familiar examples of dynamic nonbiological complex systems. Even static physical systems like some nanocomposites may exhibit properties that cannot always be deduced from those of the constituents of the composite. A particularly fascinating class of complex materials are the so-called multiferroics. A multiferroic is actually a ferroic crystalline material (a ‘natural’ composite) which just refuses to be homogeneous over macroscopic length scales, so that the same crystal may be, say, ferroelectric in some part, and ferromagnetic in another. In a multiferroic, two or all three of the electric, magnetic and elastic interactions compete in a delicately balanced manner, and even a very minor local factor can tilt the balance in favour of one or the other. This class of materials offers great scope for basic research and for device applications, particularly in smart structures.
The current concern about ecological conservation and global warming points to the need for a good understanding of complex systems, particularly their holistic nature. Mother Earth is a single, highly complex, system, now increasingly referred to as the System Earth.
A better understanding of complexity may well become a matter of life and death for the human race. And the subject of complexity science is still at the periphery of science. It has not yet become mainstream, in the sense that it is not taught routinely even at the college level. That cannot go on.
There are already a substantial number of great books on complexity science, and I have drawn on them. But I believe that this book is student-friendly and teacher-friendly, and it brings home the all-pervasive nature of the subject. Here are its salient features:
1. It provides a comprehensive update on the subject.
2. It can serve as introductory or supplementary reading for an undergraduate or graduate course on any branch of complexity science.
3. Practically all the mathematical treatment of the subject has been pushed to the appendices at the end of the book, so the main text can be comprehended even by those who are not too comfortable with equations. This is important because a large fraction of the educated public must get the hang of the nature of complexity, so that we can successfully meet the challenges posed to our very survival as a species.
4. Both among scientists and nonscientists there is a large proportion of people who are insufficiently trained about the explaining power of complexity science when it comes to some of the deepest puzzles of Nature and, hopefully, this book would help remedy the situation to some extent.
5. The book has a certain all-under-one-roof character. The topics covered are so many and so diverse that it would be well-nigh impossible for a reader, specializing in a particular branch of complexity science, not to get exposed to what is going on in the rest of complexity science! This is important, because using the insights gained in one complex system for trying to understand another complex system is the hallmark of complexity science.
6. A proper understanding of what complexity science has already achieved will also help discredit many of the claims of mystics, supernaturalists, and pseudoscientists.
September, 2017
I. Complexity Basics
1. Overview 1
1.1 Preamble 3
1.2 A whirlpool as an example of self-organization 5
1.3 Spontaneous pattern formation: the Bénard instability 6
1.4 Recent history of investigations in complexity science 8
1.5 Organization of the book 8
2. The Philosophical and Computational Underpinnings of Complexity Science 9
2.1 The scientific method for understanding natural phenomena 9
2.2 Reductionism and its inadequacy for dealing with complexity 12
2.3 The Laplace demon 13
2.4 Holism 15
2.5 Emergence 16
2.6 Scientific determinism, effective theories 17
2.7 Free will 18
2.8 Actions, reactions, interactions, causality 21
2.9 The nature of reality 23
3. The Second Law of Thermodynamics 25
3.1 The second law for isolated systems 25
3.2 Entropy 26
3.3 The second law for open systems 27
3.4 Nucleation and growth of a crystal 29
3.5 The second law is an emergent law 32
3.6 Emergence, weak and strong 33
3.7 Nature abhors gradients 33
3.8 Systems not in equilibrium 34
3.9 Thermodynamics of small systems 35
4. Dynamical Evolution 37
4.1 Dynamical systems 37
4.2 Phase-space trajectories 37
4.3 Attractors in phase space 38
4.4 Nonlinear dynamical systems 40
4.5 Equilibrium, stable and unstable 40
4.6 Dissipative structures and processes 42
4.7 Bifurcations in phase space 43
4.8 Self-organization and order in dissipative structures 44
5.Relativity Theory and Quantum Mechanics 47
5.1.Special theory of relativity 47
5.2 General theory of relativity 49
5.3 Quantum mechanics 52
5.4 Summing over multiple histories 55
6.The Nature of Information 57
6.1 Russell’s paradox 57
6.2 Hilbert’s formal axiomatic approach to mathematics 58
6.3 Gödel’s incompleteness theorem 59
6.4 Turing’s halting problem 60
6.5 Elementary information theory 63
6.6 Entropy means unavailable or missing information 65
6.7 Algorithmic information theory 66
6.8 Algorithmic probability and Ockham’s razor 69
6.9 Algorithmic information content and effective complexity 70
6.10 Classification of problems in terms of computational complexity70
6.11 ‘Irreducible complexity’ deconstructed 71
7.Darwinian Evolution, Complex Adaptive Systems, Sociobiology 75
7.1 Darwinian evolution 75
7.2 Complex adaptive systems 77
7.3 The inevitability of emergence of life on Earth 79
7.4.Sociobiology, altruism, morality, group selection 81
8. Symmetry is Supreme 83
8.1 Of socks and shoes 83
8.2 Connection between symmetry and conservation laws 83
8.3 Why so much symmetry? 84
8.4 Growth of a crystal as an ordering process 85
8.5 Broken symmetry 86
8.6 Symmetry aspects of phase transitions 88
8.7 Latent symmetry 89
8.8 Latent symmetry and the phenomenon of emergence in complex systems 90
8.9 Broken symmetry and complexity 91
8.10 Symmetry of complex networks 92
9. The Standard Model of Particle Physics 95
9.1 The four fundamental interactions 95
9.2 Bosons and fermions 96
9.3 The standard model and the Higgs mechanism 98
10. Cosmology Basics 101
10.1 The ultimate causes of all cosmic order and structure 101
10.2 The Big Bang and its aftermath 102
10.3 Dark matter and dark energy 105
10.4 Cosmic inflation 108
10.5 Supersymmetry, string theories, M-theory 109
10.6 Has modern cosmology got it all wrong? 111
11. Uncertainty, Complexity, and the Arrow of Time 117
11.1 Irreversible processes, and not entropy, determine the arrow of time 117
11.2 Irreversible processes can lead to order 117
11.3 The arrow of time and the early universe 118
11.4 When did time begin? 119
11.5 Uncertainty and complex adaptive systems 120
12. The Cosmic Evolution of Complexity 123
12.1 Our cosmic history 123
12.2 We are star stuff 124
13. Why Are the Laws of Nature What They Are? 127
13.1 The laws of Nature in our universe 127
13.2 The anthropic principle 128
14. The Universe is a Quantum Computer 131
14.1 Quantum computation 131
14.2 Quantum entanglement 132
14.3 The universe regarded as a quantum computer 133
15. Chaos, Fractals, and Complexity 135
15.1 Nonlinear dynamics 135
15.2 Extreme sensitivity to initial conditions 136
15.3 Chaotic rhythms of population sizes 137
15.4 Fractal nature of the strange attractor 139
15.5 Chaos and complexity 141
16. Cellular Automata as Models of Complex Systems 143
16.1 Cellular automata 143
16.2 Conway’s Game of Life 143
16.3 Self-reproducing automata 145
16.4 The four Wolfram classes of cellular automata 146
16.5 Universal cellular automata 147
17. Wolfram’s ‘New Kind of Science’ 149
17.1 Introduction 149
17.2 Wolfram’s principle of computational equivalence (PCE) 150
17.3 The PCE and the rampant occurrence of complexity 151
17.4 Why does the universe run the way it does? 152
17.5 Criticism of Wolfram’s NKS 153
18. Swarm Intelligence 157
18.1 Emergence of swarm intelligence in a beehive 157
18.2 Ant logic 159
18.3 Positive and negative feedback in complex systems 160
19. Nonadaptive Complex Systems 163
19.1 Composite materials 163
19.2 Ferroic materials 163
19.3 Multiferroics 164
19.4 Spin glasses 165
19.5 Relaxor ferroelectrics 166
19.6 Relaxor ferroelectrics as vivisystems 167
20. Self-Organized Criticality, Power Laws 169
20.1 The sandpile experiment 169
20.2 Power-law behaviour and complexity 170
20.3 Robust and nonrobust criticality 173
21. Characteristics of Complex Systems 175
II. Pre-Human Evolution of Complexity
22. Evolution of Structure and Order in the Cosmos 183
22.1 The three eras in the cosmic evolution of complexity 183
22.2 Chaisson’s parameter for quantifying the degree of complexity 183
22.3 Cosmic evolution of information 184
22.4 Why so much terrestrial complexity? 186
23. The Primary and Secondary Chemical Bonds 187
23.1 The primary chemical bonds 187
23.2 The secondary chemical bonds 189
23.3 The hydrogen bond and the hydrophobic interaction 190
24. Cell Biology Basics 193
25. Evolution of Chemical Complexity 197
25.1 Of locks and keys in the world of molecular self-assembly 197
25.2 Self-organization of matter 199
25.3 Emergence of autocatalytic sets of molecules 202
25.4 Positive feedback, pattern formation, emergent phenomena 204
25.5 Pattern formation: the BZ reaction 205
26. What is Life? 207
26.1 Schrödinger and life 207
26.2 Koshland’s ‘seven pillars of life’ 209
27. Models for the Origins of Life 211
27.1 The early work 211
27.2 The RNA-world model for the origin of life 213
27.3 Dyson’s proteins-first model for the origins of life 215
27.4 Why was evolution extremely fast for the earliest life? 218
28. Genetic Regulatory Networks and Cell Differentiation 219
28.1 Circuits in genetic networks 220
28.2 Kauffman’s work on genetic regulatory networks 221
29. Ideas on the Origins of Species: From Darwin to Margulis 223
29.1 Darwinism and neo-Darwinism 223
29.2 Biological symbiosis and evolution 225
29.3 What is a species 227
29.4 Horizontal gene transfer in the earliest life forms 228
29.5 Epigenetics 229
30. Coevolution of Species 231
30.1 Punctuated equilibrium in the coevolution of species 231
30.2 Evolutionarily stable strategies 232
30.3 Of hawks and doves in the logic of animal conflicts 234
30.4 Evolutionary arms races and the life-dinner principle 236
31. The Various Energy Regimes in the Evolution of Our Ecosphere 241
31.1 The thermophilic energy regime 242
31.2 The phototrophic energy regime 244
31.3 The aerobic energy regime 245
III. Humans and the Evolution of Complexity
32. Evolution of Niele’s Energy Staircase After the Emergence of Humans 249
32.1 The pyrocultural energy regime 249
32.2 The agrocultural energy regime 251
32.3 The carbocultural energy regime 252
32.4 The green-valley approach to System Earth 253
32.5 The imperial approach to System Earth 254
32.6 A nucleocultural energy regime? 256
32.7 A possible ‘heliocultural’ energy regime 258
33. Computational Intelligence 261
33.1 Introduction 261
33.2 Fuzzy logic 262
33.3 Neural networks, real and artificial 263
33.4 Genetic algorithms 265
33.5 Genetic programming: Evolution of computer programs 267
33.6 Artificial life 271
34. Adaptation and Learning in Complex Adaptive Systems 273
34.1 Holland’s model for adaptation and learning 273
34.2 The bucket brigade in Holland’s algorithm 274
34.3 Langton’s work on adaptive computation 276
34.4 The edge-of-chaos existence of complex adaptive systems 278
35. Smart Structures 281
35.1 The three main components of a smart structure 281
35.2 Reconfigurable computers and machines that can evolve 283
36. Robots and Their Dependence on Computer Power 287
36.1 Behaviour-based robotics 287
36.2 Evolutionary robotics 288
36.3 Evolution of computer power per unit cost 290
37. Machine Intelligence 295
37.1 Artificial distributed intelligence 295
37.2 Evolution of machine intelligence 296
37.3 The future of intelligence and the status of humans 298
38. Evolution of Language 303
39. Memes and Their Evolution 307
40. Evolution of the Human Brain, and the Nature of Our Neocortex 311
40.1 Evolution of the brain 312
40.2 The human neocortex 313
40.3 The history of intelligence 315
41. Minsky’s and Hawkins’ Models for how Our Brain Functions 319
41.1 Marvin Minsky’s ‘Society of Mind’ 319
41.2 Can we make decisions without involving emotions? 320
41.3 Hawkins’ model for intelligence and consciousness 323
42. Inside the Human Brain 325
42.1 Probing the human Brain 325
42.2 Peering into the human brain 327
43. Kurzweil’s Pattern-Recognition Theory of Mind 331
44. The Knowledge Era and Complexity Science 337
44.1 The wide-ranging applications of complexity science 337
44.2 Econophysics 338
44.3 Application of complexity-science ideas in management science 341
44.4 Cultural evolution and complexity transitions 343
44.5 Complexity leadership theory 345
44.6 Complexity science in everyday life 345
45. Epilogue 347
IV. Appendices
A1. Equilibrium Thermodynamics and Statistical Mechanics 357
A1.1 Equilibrium thermodynamics 357
A1.2 Statistical mechanics 360
A1.3 The ergodicity hypothesis 360
A1.4 The partition function 361
A1.5 Tsallis thermodynamics of small systems 361
A2. Probability Theory 365
A2.1 The notion of probability 365
A2.2 Multivariate probabilities 365
A2.3 Determinism and predictability 367
A3. Information and Uncertainty 369
A3.1 Information theory 369
A3.2 Shannon’s formula for a numerical measure of information 370
A3.3 Shannon entropy and thermodynamic entropy 371
A3.4 Uncertainty 372
A3.5 Algorithmic information theory 373
A4. Thermodynamics and Information 375
A4.1 Entropy and information 375
A4.2 Kolmogorov-Sinai entropy 376
A4.3 Mutual information and redundancy of information 377
A5. Systems Far from Equilibrium 379
A5.1 Emergence of complexity in systems far from equilibrium 379
A5.2 Nonequilibrium classical dynamics 380
A5.3 When does the Newtonian description break down? 383
A5.4 Generalization of Newtonian dynamics 384
A5.5 Pitchfork bifurcation 386
A5.6 Extension of Newton’s laws 386
A6. Quantum Theory and Particle Physics 389
A6.1 Introduction 389
A6.2 The Heisenberg uncertainty principle 389
A6.3 The Schrödinger equation 390
A6.4 The Copenhagen interpretation 391
A6.5 Time asymmetry 391
A6.6 Multiple universes 391
A6.7 Feynman’s sum-over-histories formulation 392
A6.8 Quantum Darwinism 393
A6.9 Gell-Mann’s coarse-graining interpretation 393
A6.10 Poincaré resonances and quantum theory 394
A6.11 Model-dependent realism, intelligence, existence 396
A6.12 The principle of conservation of quantum information 397
A6.13 Particle physics 398
A7. Theory of Phase Transitions and Critical Phenomena 401
A7.1 A typical phase transition 401
A7.2 Liberal definitions of phase transitions 401
A7.3 Instabilities can cause phase transitions 402
A7.4 Order parameter of a phase transition 403
A7.5 The response function corresponding to the order parameter 404
A7.6 Phase transitions near thermodynamic equilibrium 404
A7.7 The Landau theory of phase transitions 405
A7.8 Spontaneous breaking of symmetry 407
A7.9 Field-induced phase transitions 407
A7.10 Ferroic phase transitions 408
A7.11 Prototype symmetry 409
A7.12 Critical phenomena 409
A7.13 Universality classes and critical exponents 410
A8. Chaos Theory 413
A8.1 The logistic equation 413
A8.2 Lyapunov exponents 416
A8.3 Divergence of neighbouring trajectories 417
A8.4 Chaotic attractors 419
A9. Network Theory and Complexity 421
A9.1 Graphs 421
A9.2 Networks 425
A9.3 The travelling-salesman problem 426
A9.4 Random networks 427
A9.5 Percolation transitions in random networks 428
A9.6 Small-world networks 429
A9.7 Scale-free networks 431
A9.8 Evolution of complex networks 432
A9.9 Emergence of symmetry in complex networks 433
A9.10 Chua’s cellular nonlinear networks as a paradigm for emergence and complexity 435
A10. Game Theory 439
A10.1 Introduction 439
A10.2 Dual or two-player games 442
A10.3 Noncooperative games 449
A10.4 Nash equilibrium 450
A10.5 Cooperative games 450
Bibliography 453
Index 481
Acknowledgements 491
About the Author 492
NOTE ADDED ON 13th September 2017
The second edition of this book was published today. A number of corrections and other improvements have been incorporated. The font size has been reduced by 10%. New information has been added, and some less relevant material has been removed. Accordingly, the list of contents etc. underwent some minor changes, which have been incorporated in this blog post. |
249fbdff3d2082f3 | Physics Friday 60
Let us consider a hydrogen atom: a single electron “orbiting” a single (much heavier) nucleus containing a single proton. The potential energy due to the attractive Coulomb force between these charged particles is , where r is the distance between the particles. The time-independent Schrödinger equation for the electron (ignoring relativistic effects, particle spins, and magnetic moments) is
(Here, the electron mass m should actually be the reduced mass μ of the electron-proton pair; however, the correction involved is small, and even smaller for more massive atomic nuclei). With the way our potential energy is defined (with zero energy at infinite separation), our bound states for the electron (our states of interest for an atom) will have E<0.
Now, we note that the potential is spherically symmetric, and so our Hamiltonian commutes with the angular momentum operators, and we can perform separation of variables in spherical coordinates, with the angular components being the spherical harmonics (see here and here).
As in here, when we perform the spherical coordinate separation , we obtain radial equation
Making the substitution ,
Now, let us define , which has units of length, and dimensionless variable , so that . Then we have
now, let us make the transform (ρ is dimensionless); then we get
We see that as ρ→∞, the equation is approximated by
, which has general solution ; normalizability requires C2=0. Also, examining ρ→0, our equation is approximately , which has general solution ; considering the origin tells us C2=0 again. Combining these, we thus try the substitution , giving radial equation
Now, letting , we get
Which is the associated Laguerre differential equation
with and ; so the solution to our transformed equation is
, where is an associated Laguerre function. Now, the resulting wavefunction can be normalized only if is a polynomial; this is true when is a non-negative integer; thus, we have , where n is a positive integer, and 0≤ln-1. Looking back through our conversions,
we have
where , is the constant needed to normalize the wavefunction; with some work involving the properties of associated Laguerre polynomials (see for example here), we can find the normalized wavefunction to be:
Where n=1,2,3,…, l=0,1,2,…,n-1, and m=-l, –l+1,…,l-1,l (for a given n, we have n2 different angular momentum states).
One should note that is the Bohr radius.
Now, recall that we had ; with the normalization condition , we have
, and the ground state energy of hydrogen is approximately -13.6 eV, and so the ionization energy of hydrogen is approximately 13.6 eV.
Tags: , , , , , , , , ,
Leave a Reply
WordPress.com Logo
Twitter picture
Facebook photo
Google+ photo
Connecting to %s
%d bloggers like this: |
b0b839e79b7a2025 | söndag 24 februari 2013
2nd Coming of the 2nd Law
The 2nd Law of Thermodynamics has remained as a main mystery of physics ever since it was first formulated by Clausius in 1865 as non-decrease of entropy, despite major efforts by mathematical physicists to give it a rational understandable meaning.
The view today is, based on the work by Ludwig Boltzmann, that the 2nd Law is a statistical law expressing a lack of precise human knowledge of microscopic physics, rather than a physical law independent of human observation and measurement. This view prepared the statistical interpretation of quantum mechanics as the basis of modern physics.
Modern physics is thus focussed on human observation of realities, while classical physics concerns realities independent of human observation. To involve the observer into the observed makes physics subjective which means a depart from the essence of physics of objectivity. A 2nd Law based on statistics thus comes along with many difficulties, which ended Boltzmann's life, and it is natural to seek a formulation in terms of classical physics without statistics.
Such a formulation is given in Computational Thermodynamics based on the Euler equations for an ideal compressible gas solved by finite precision computation. In this formulation the 2nd Law is a consequence of the following equations expressing conservation of kinetic energy K and internal (heat) energy E:
• dK/dt = W - D
• dE/dt = - W + D
• D > = 0,
where W is work and D is nonnegative turbulent dissipation (rates). The crucial element is the turbulent dissipation rate D which is non-negative, and thus signifies one-way transfer of energy from kinetic energy K into heat energy E.
The work W, positive in expansion and negative in compression, allows a two-way transfer between K and E, while turbulent diffusion D >= 0 can only transfer kinetic energy K into heat energy E, and not the other way.
We compare dE/dt = - W + D or rewritten as dE/dt + W = D as an alternative formulation of the 2nd Law, with the classical formulation found in books on thermodynamics:
• dE + pdV = TdS = dQ
• dS > = 0,
where p is pressure, V is volume (with pdV corresponding to W), T is temperature, S is entropy and dQ added heat energy.
We see that D >= 0 expresses the same relation as dS >= 0 since T > 0, and thus the alternative formulation expresses the same effective physics as the classical formulation.
The advantage of the alternative formulation is that turbulent dissipation rate D with D >= 0 has a direct physical meaning, while the physical meaning of S and dS >= 0 has remained a mystery.
The alternative formulation thus gives a formulation in terms of physical quantities without any need to introduce a mysterious concept of entropy, which cannot decrease for some mysterious reason. A main mystery of science can thus be put into the wardrobe of mysteries without solution and meaning, together with phlogistons.
Notice the connection to Computational Blackbody Radiation with an alternative proof of Planck's radiation law with again statistics replaced by finite precision computation.
For a recent expression of the confusion and mystery of the 2nd Law, see Ludwig Boltzmann: a birthday by Lubos.
PS1 The reason to define S by the relation dE + pdV = TdS is that for an ideal gas with pV = RT this makes dS = dE/T + pdV/T an exact differential, thus defining S in terms of T and p. The trouble with S thus defined, is that it lacks direct physical meaning.
PS2 Lubos refers to Bohr's view of physics:
• There is no quantum world. There is only an abstract physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature...
This idea has ruined modern physics by encouraging a postmodern form medieval mysticism away from rational objectivity as the essence of science, where the physical world is reduced to a phantasm in the mind of the observer busy counting statistics of non-physical micro states.
PS3 Recall that statistics was introduced by Boltzmann to give a mathematical proof of the 2nd law, which appeared to be impossible using reversible Newtonian micromechanics, followed by Planck to prove his law of radiation, followed by Born to give the multidimensional Schrödinger equation an interpretation. But this was overkill. It is possible to prove a 2nd law and law of radiation using instead of full statistics a concept of finite precision computation as shown in Computational Thermodynamics and Computational Blackbody Radiation, which maintains the rationalism and objectivity of classical mechanics, while avoiding the devastating trap of reversible micromechanics.
24 kommentarer:
1. Exactly where is the confusion and mystery in the last link you give to the blog-post by Motl?
Reading the post it lucidly explains where the confusion about the second law originates (you seem to run into this trap your self looking at your presented theory) and it dispels any mystery. Or it certainly will in near future, if one look at one of the first comments, coarse-graining is mentioned and the original writer mention a will to write a specific blog-post touching upon this.
Oh, and a final question. How does your version of the second law work in a solid?
2. Friction is for a solid friction what corresponds to turbulent diffusion for a gas. The confusion with statistics is that statistics is done by humans but not by physical objects. If you believe in physics independent of human observation, statistics is not an option. If you think that you are the center of the universe, statistics is fine.
3. Did you read the blog-post that you refer to?
4. The post is a repetition of standard statistical mechanics, which in my opinion is not physics.
5. That was the point...
Statistical mechanics are neither confusing or mysterious.
That you doesn't consider it to be physics is more your loss then those who use it as an indispensable tool in describing and predicting nature.
6. Yes, maybe it reflects a limitation of my mind. But all that glitters in mechanics is not gold, according to Schrödinger and Einstein...
7. In your opinion, what kind of theory should one use on length and time scales where continuum mechanics breaks down? What is the physical meaning of D in those regimes?
The theory you present here looks exactly as the second law defined from a dissipation function originating from mass, momentum and energy conservation in a fluid. This is in the curriculum in an ordinary master level course in continuum mechanics. What is new?
8. A proper version of the Schrödinger equation may be used. This is also a continuum model and as such subject to the presence of D as a reflection of impossibility to exactly satisfy the conservation law expressed by Schrödingers equation, which reflects the ,inevitable appearance of turbulence in systems with many components/particles/atoms. The novelty in our approach is to give the dissipation a meaning as reflecting an impossibility of satisfying the exact conservation laws in finite precision computation/finite precision physics in which by necessity local mean values will
be taken, which is the essence of turbulence. This also gives a way to understand turbulence as shown in my book Computational Turbulent Incompressible Flow. To simply assume positive dissipation/friction without explaining from where the dissipation/friction is coming, does not answer the key question why there is a 2nd law and why there is dissipation/friction. This is what I seek to do. It can be viewed as a very primitive form of statistics without the drawbacks of statistical mechanics with its horrendous calculations of number of microstates. The irreversibility of smashing an egg is then explained as an impossibility to realize the high precision required in finite time.
9. So you do acknowledge that statistics is unavoidable?
Further, there seems to be a lot of unproven assumptions and loose ends. Have you tried this approach on nano-systems? Does it work if this is applied to all the empirical data connected to nano-science?
10. Claes, I do not disagree with your assessment. The reason I pointed to Lubos's post was that he clearly determines using statistics that heat flows one way as does time and DLR can not occur in the macrostate if the atmospheric temperature is less than the surface temperature because in his terms that would decrease entropy.
I find Lubos's string theory interesting. I think he is very knowledgeable in his field but I recognise that he has no experience in engineering science such as heat & mass transfer, or fluid dynamics. For example he makes some incorrect assumption in his post about Venus, http://motls.blogspot.com.au/2010/05/hyperventilating-on-venus.html, but a least he comes up with the answer that there is no significant "greenhouse effect" on Venus. His post and others have discredited the gurus of AGW such as Sir John Houghton (who included a greenhouse Venus in his poor quality book "The Physics of Atmospheres 1986 2nd edition)
11. No, it is snot tatistics but finite precision computation, which is like chopping decimals up or down, which is a very primitive and hence understandable form of statistics, very different from counting number of microstates. Finite precision computation gives classical deterministic models a new life to the benefit of mankind. Applications are endless.
12. In my view statistics is not real objective physics but rather subjective physics of the mind of the observer, which at least for macroscopic physics is against the basic principle of science of objectivity and repeatability.
13. If you come to rhe same result, as Boltzmann and Planck, with your finite precision computation methode, what´s then the problem. That´s very good I think. The result is even more true the more ways you can prove it in. But with your way to calculate things, you can never proove the nonexistance of DLR, how much you will try. Planck`s law tells it exist and furthermore Kirchoffs radiation law tells that DLR can be absorbed by materia at the surface of the earth. If you say something else, you are denying these laws. Are you?
14. DLR violates the 2nd law, and thus cannot exist as a physical phenomenon, only as a phantasm in twisted minds.
15. Then Planck´s and Kirchoff´s radiation laws violates the 2nd law! Do you really believe that?
16. DLR violates the 2nd Law. If Planck was alive he could tell if he insists that his radiation law includes DLR. In his absence we have to think ourselves, and this is what I have done.
17. Claes,
you write: "DLR violates the 2nd Law."
Why? DLR is a consequence of the temperature of the atmosphere.
So why is it violating the 2nd law?
Can you explain this without repeating just your claim.
Best regards
18. Transfer of heat energy from cold to warm without external forcing violates the 2nd law.
19. (I repeat:)
CO2 (in atmosphere) radiates at 667. Also, acc to Planck´s law, the earth radiation spectrum contains 667. Then Kirchoff´s law tells that the earth can absorb this radiation. Do you deny this?
20. I repeat: If you are convinced that non-forced heat transfer from cold to warm is possible, I suggest that you present your ideas to Vattenfall and study the reaction, instead of bombarding me with silly questions.
21. Claes, I don´t say that heat is transferred from CO2 to the earth. I just say that, in the first step, radiation from CO2 evidently, acc to Planck´s and Kirchoff´s laws, is absorbed by the earth. In the next step, of course, the earth is reemitting this energy, or more, back to some receiver, acc to the same laws. This means that the earth can´t be directly warmed by a colder materia, like CO2, but DLR from a colder body exists. Is this so hard to understand. This also means that the 2nd law is not applicable in the first step above, but after the second step. The 2nd law is best suited for macroscopic systems. It was invented before the atomic behaviour was fully clear.
I don´t think my view on things is more silly than yours, rather the opposite. Can you agree on something I have written here.
22. Lasse H, it seems you do not want to understand or open your closed thinking. I have suggested to you to read chapter 4-Thermodynamics and 5-Heat& Mass transfer of Perry's Chemical Engineering Handbook which has been in existence since 1934 with many revisions and editions to keep it upto date. Mark's Mechanical Engineering Handbook has similar sections but less detailed about the work of Prof Hoyt Hottel who carried out a vast amount of research on the absorption and emission of gases (from combustion) containing water vapor and carbon dioxide. Clearly you do not understand Kirchoff's law, or radiative & convective heat transfer (are you aware of the Nusselt number?- I think not). What you say above is wrong. You are just repeating the nonsense spread by alarmists who a) have no qualifications in engineering science and b) have had no experience of design and measurement of combustion and heat transfer systems and equipment.
23. Cementafriend: Planck´s and Kirchoff´s laws speak for themselves. I have not invented them.
24. The Uranus Dilemma
Consideration of the planet Uranus very clearly indicates that radiative models (and any type of "Energy Budget" similar to those produced by the IPCC) can never be used to explain observed temperatures on Uranus. We can deduce that there must be some other physical process which transfers some of the energy absorbed in the upper levels of the Uranus atmosphere from the meagre 3W/m^2 of Solar radiation down into its depths, and that same mechanism must "work" on all planets with significant atmospheres.
Uranus is an unusual planet in that there is no evidence of any internal heat generation. Yet, as we read in this Wikipedia article, the temperature at the base of its (theoretical) troposphere is about 320K - quite a hot day on Earth. But it gets hotter still as we go further down in an atmosphere that is nearly 20,000Km in depth. Somewhere down there it is thought that there is indeed a solid core with about half the mass of Earth. The surface of that mini Earth is literally thousands of degrees. And of course there's no Solar radiation reaching anywhere near that depth.
So how does the necessary energy get down there, or even as far as the 320K base of the troposphere? An explanation of this requires an understanding of the spontaneous process described in the Second Law of Thermodynamics, which is stated here as ...
"The second law of thermodynamics: An isolated system, if not already in its state of thermodynamic equilibrium, spontaneously evolves towards it. Thermodynamic equilibrium has the greatest entropy amongst the states accessible to the system"
Think about it, and I'll be happy to answer any questions - and explain what actually happens, not only on Uranus, Venus, Jupiter etc, but also on Earth. |
4064fb5c3572d839 | Correcting the U(1) error in the Standard Model of particle physics
Fundamental particles in the SU(2)xU(1) part of the Standard Model
Above: the Standard Model particles in the existing SU(2)xU(1) electroweak symmetry group (a high-quality PDF version of this table can be found here). The complexity of chiral symmetry – the fact that only particles with left-handed spins (Weyl spinors) experience the weak force – is shown by the different effective weak charges for left and right handed particles of the same type. My argument, with evidence to back it up in this post and previous posts, is that there are no real ‘singlets’: all the particles are doublets apart from the gauge bosons (W/Z particles) which are triplets. This causes a major change to the SU(2)xU(1) electroweak symmetry. Essentially, the U(1) group which is a source of singlets (i.e., particles shown in blue type in this table which may have weak hypercharge but have no weak isotopic charge) is removed! An SU(2) symmetry group then becomes a source of electric and weak hypercharge, as well as its existing role in Standard Model as a descriptor of the isotopic spin. It modifies the role of the ‘Higgs bosons’: some such particles are still be required to give mass, but the mainstream electroweak symmetry breaking mechanism is incorrect.
There are 6 rather than 4 electroweak gauge bosons, the same 3 massive weak bosons as before, but 2 new charged massless gauge bosons in addition to the uncharged massless ‘photon’, B. The 3 massless gauge bosons are all massless counterparts to the 3 massive weak gauge bosons. The ‘photon’ is not the gauge boson of electromagnetism because, being neutral, it can’t represent a charged field. Instead, the ‘photon’ gauge boson is the graviton, while the two massless gauge bosons are the charged exchange radiation (gauge bosons) of electromagnetism. This allows quantitative predictions and the resolution of existing electromagnetic anomalies (which are usually just censored out of discussions).
It is the U(1) group which falsely introduces singlets. All Standard Model fermions are really doublets: if they are bound by the weak force (i.e., left-handed Weyl spinors) then they are doublets in close proximity. If they are right-handed Weyl spinors, they are doublets mediated by only strong, electromagnetic and gravitational forces, so for leptons (which don’t feel the strong force), the individual particles in a doublet can be located relatively far from another (the electromagnetic and gravitational interactions are both long-range forces). The beauty of this change to the understanding of the Standard Model is that gravitation automatically pops out in the form of massless neutral gauge bosons, while electromagnetism is mediated by two massless charged gauge bosons, which gives a causal mechanism that predicts the quantitative coupling constants for gravity and electromagnetism correctly. Various other vital predictions are also made by this correction to the Standard Model.
Fundamental vector boson charges of SU(2)
Above: the fundamental vector boson charges of SU(2). For any particle which has effective mass, there is a black hole event horizon radius of 2GM/c2. If there is a strong enough electric field at this radius for pair production to occur (in excess of Schwinger’s threshold of 1.3*1018 v/m), then pairs of virtual charges are produced near the event horizon. If the particle is positively charged, the negatively charged particles produced at the event horizon will fall into the black hole core, while the positive ones will escape as charged radiation (see Figures 2, 3 and particularly 4 below for the mechanism for propagation of massless charged vector boson exchange radiation between charges scattered around the universe). If the particle is negatively charged, it will similarly be a source of negatively charged exchange radiation (see Figure 2 for an explanation of why the charge is never depleted by absorbing radiation from nearby pair production of opposite sign to itself; there is simply an equilibrium of exchange of radiation between similar charges which cancels out that effect). In the case of a normal (large) black hole or neutral dipole charge (one with equal and opposite charges, and therefore neutral as a whole), as many positive as negative pair production charges can escape from the event horizon and these will annihilate one another to produce neutral radiation, which produces the right force of gravity. Figure 4 proves that this gravity force is about 1040 times stronger than electromagnetism. Another earlier post calculates the Hawking black hole radiation rate and proves it creates the force strength involved in electromagnetism.
(For a background to the elementary basics of quantum field theory and quantum mechanics, like the Schroedinger and Dirac equations and their consequences, see the earlier post on The Physics of Quantum Field Theory. For an introduction to symmetry principles, see the previous post.)
The SU(2) symmetry can model electromagnetism (in addition to isospin) because it models two types of charges, hence giving negative and positive charges without the wrong method U(1) uses (where it specifies there are only negative charges, so positive ones have to be represented by negative charges going backwards in time). In addition, SU(2) gives 3 massless gauge bosons, two charged ones (which mediate the charge in electric fields) and one neutral one (which is the spin-1 graviton, that causes gravity by pushing masses together). In addition, SU(2) describes doublets, matter-antimatter pairs. We know that electrons are not produced individidually, only in lepton-antilepton pairs. The reason why electrons can be separated a long distance from their antiparticle (unlike quarks) is simply the nature of the binding force, which is long range electromagnetism instead of a short-range force.
Quantum field theory, i.e., the standard model of particle physics, is based mainly on experimental facts, not speculating. The symmetries of baryons give SU(3) symmetry, those of mesons give SU(2) symmetry. That’s experimental particle physics. The problem in the standard model SU(3)xSU(2)xU(1) is the last component, the U(1) electromagnetic symmetry. In SU(3) you have three charges (coded red, blue and green) and form triplets of quarks (baryons) bound by 32-1 = 8 charged gauge bosons mediating the strong force. For SU(2) you have two charges (two isospin states) and form doublets, i.e., quark-antiquark pairs (mesons) bound by 22-1 = 3 gauge bosons (one positively charged, one negatively charged and one neutral).
One problem comes when electromagnetism is represented by U(1) and added to SU(2) to form the electroweak unification, SU(2)xU(1). This means that you have to add a Higgs field which breaks the SU(2)xU(1) symmetry at low energy, by giving masses (at low energy only) to the 3 gauge bosons of SU(2). At high energy, the masses of those 3 gauge bosons must disappear, so that they are massless, like the photon assumed to mediate the electromagnetic force represented by U(1). The required Higgs field which adds mass in the right way for electroweak symmetry breaking to work in the Standard Model but adds complexity and isn’t very predictive.
The other, related, problem is that SU(2) only acts on left-handed particles, i.e., particles whose spin is described by a left-handed Weyl spinor. U(1) only has one electric charge, the electron. Feynman represents positrons in the scheme as electrons going backwards in time, and this makes U(1) work, but it has many problems and a massless version of SU(2) is the correct electromagnetism-gravitational model.
So the correct model for electromagnetism is really SU(2) which has two types of electric charge (positive and negative) and acts on all particles regardless of spin, and is mediated by three types of massless gauge bosons: negative ones for the fields around negative charges, positive ones for positive fields, and neutral ones for gravity.
The question then is, what is the corrected Standard Model? If we delete U(1) do we have to replace it with another SU(2) to get SU(3)xSU(2)xSU(2), or do we just get SU(3)xSU(2) in which SU(2) takes on new meaning, i.e., there is no symmetry breaking?
Assume the symmetry group of the universe is SU(3)xSU(2). That would mean that the new SU(2) interpretation has to do all the work and more of SU(2)xU(1) in the existing Standard Model. The U(1) part of SU(2)xU(1) represented both electromagnetism and weak hypercharge, while SU(2) represented weak isospin.
We need to dump the Higgs field as a source for symmetry breaking, and replace it with a simpler mass-giving mechanism that only gives mass to left-handed Weyl spinors. This is because the electroweak symmetry breaking problem has disappeared. We have to use SU(2) to represent isospin, weak hypercharge, electromagnetism and gravity. Can it do all that? Can the Standard Model be corrected by simply removing U(1) to leave SU(3)xSU(2) and having the SU(2) produce 3 massless gauge bosons (for electromagnetism and gravity) and 3 massive gauge bosons (for weak interactions)? Can we in other words remove the Higgs mechanism for electroweak symmetry breaking and replace it by a simpler mechanism in which the short range of the three massive weak gauge bosons distinguishes between electromagnetism (and gravity) from the weak force? The mass giving field only gives mass to gauge bosons that normally interact with left-handed particles. What is unnerving is that this compression means that one SU(2) symmetry is generating a lot more physics than in the Standard Model, but in the Standard Model U(1) represented both electric charge and weak hypercharge, so I don’t see any reason why SU(2) shouldn’t represent weak isospin, electromagnetism/gravity and weak hypercharge. The main thing is that because it generates the 3 massless gauge bosons, only half of which need to have mass added to them to act as weak gauge bosons, it has exactly the right field mediators for the forces we require. If it doesn’t work, the alternative replacement to the Standard Model is SU(3)xSU(2)xSU(2) where the first SU(2) is isospin symmetry acting on left-handed particles and the second SU(2) is electrogravity.
Mathematical review
Following from the discussion in previous posts, it is time to correct the errors of the Standard Model, starting with the U(1) phase or gauge invariance. The use of unitary group U(1) for electromagnetism and weak hypercharge is in error as shown in various ways in the previous posts here, here, and here.
The maths is based on a type of continuous group defined by Sophus Lie in 1873. Dr Woit summarises this very clearly in Not Even Wrong (UK ed., p47): ‘A Lie group … consists of an infinite number of elements continuously connected together. It was the representation theory of these groups that Weyl was studying.
‘A simple example of a Lie group together with a representation is that of the group of rotations of the two-dimensional plane. Given a two-dimensional plane with chosen central point, one can imagine rotating the plane by a given angle about the central point. This is a symmetry of the plane. The thing that is invariant is the distance between a point on the plane and the central point. This is the same before and after the rotation. One can actually define rotations of the plane as precisely those transformations that leave invariant the distance to the central point. There is an infinity of these transformations, but they can all be parametrised by a single number, the angle of rotation.
Not Even Wrong
Argand diagram showing rotation by an angle on the complex plane. Illustration credit: based on Fig. 3.1 in Not Even Wrong.
‘If one thinks of the plane as the complex plane (the plane whose two coordinates label the real and imaginary part of a complex number), then the rotations can be thought of as corresponding not just to angles, but to a complex number of length one. If one multiplies all points in the complex plane by a given complex number of unit length, one gets the corresponding rotation (this is a simple exercise in manipulating complex numbers). As a result, the group of rotations in the complex plane is often called the ‘unitary group of transformations of one complex variable’, and written U(1).
‘This is a very specific representation of the group U(1), the representation as transformations of the complex plane … one thing to note is that the transformation of rotation by an angle is formally similar to the transformation of a wave by changing its phase [by Fourier analysis, which represents a waveform of wave amplitude versus time as a frequency spectrum graph showing wave amplitude versus wave frequency by decomposing the original waveform into a series which is the sum of a lot of little sine and cosine wave contributions]. Given an initial wave, if one imagines copying it and then making the copy more and more out of phase with the initial wave, sooner or later one will get back to where one started, in phase with the initial wave. This sequence of transformations of the phase of a wave is much like the sequence of rotations of a plane as one increases the angle of rotation from 0 to 360 degrees. Because of this analogy, U(1) symmetry transformations are often called phase transformations. …
‘In general, if one has an arbitrary number N of complex numbers, one can define the group of unitary transformations of N complex variables and denote it U(N). It turns out that it is a good idea to break these transformations into two parts: the part that just multiplies all of the N complex numbers by the same unit complex number (this part is a U(1) like before), and the rest. The second part is where all the complexity is, and it is given the name of special unitary transformations of N (complex) variables and denotes SU(N). Part of Weyl’s achievement consisted in a complete understanding of the representations of SU(N), for any N, no matter how large.
‘In the case N = 1, SU(1) is just the trivial group with one element. The first non-trivial case is that of SU(2) … very closely related to the group of rotations in three real dimensions … the group of special orthagonal transformations of three (real) variables … group SO(3). The precise relation between SO(3) and SU(2) is that each rotation in three dimensions corresponds to two distinct elements of SU(2), or SU(2) is in some sense a doubled version of SO(3).’
Hermann Weyl and Eugene Wigner discovered that Lie groups of complex symmetries represent quantum field theory. In 1954, Chen Ning Yang and Robert Mills developed a theory of photon (spin-1 boson) mediator interactions in which the spin of the photon changes the quantum state of the matter emitting or receiving it via inducing a rotation in a Lie group symmetry. The amplitude for such emissions is forced, by an empirical coupling constant insertion, to give the measured Coulomb value for the electromagnetic interaction. Gerald ‘t Hooft and Martinus Veltman in 1970 argued that the Yang-Mills theory is renormalizable so the problem of running couplings having no limits can be cut off at effective limits to make the theory work (Yang-Mills theories use non-commutative algebra, usually called non-commutative geometry). The photon Yang-Mills theory is U(1). Equivalent Yang-Mills interaction theories of the strong force SU(3) and the weak force isospin group SU(2) in conjunction with the U(1) force result in the symmetry group SU(3) x SU(2) x U(1) which is the Standard Model. Here the SU(2) group must act only on left-handed spinning fermions, breaking the conservation of parity.
Dr Woit’s Not Even Wrong at pages 98-100 summarises the problems in the Standard Model. While SU(3) ‘has the beautiful property of having no free parameters’, the SU(2)xU(1) electroweak symmetry does introduce two free parameters: alpha and the mass of the speculative ‘Higgs boson’. However, from solid facts, alpha is not a free parameter but the shielding ratio of the bare core charge of an electron by virtual fermion pairs being polarized in the vacuum and absorbing energy from the field to create short range forces:
“This shielding factor of alpha can actually obtained by working out the bare core charge (within the polarized vacuum) as follows. Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is on the order h-bar. The uncertainty in momentum p = mc, while the uncertainty in distance is x = ct. Hence the product of momentum and distance, px = (mc).(ct) = Et where E is energy (Einstein’s mass-energy equivalence). Although we have had to assume mass temporarily here before getting an energy version, this is just what Professor Zee does as a simplification in trying to explain forces with mainstream quantum field theory (see previous post). In fact this relationship, i.e., product of energy and time equalling h-bar, is widely used for the relationship between particle energy and lifetime. The maximum possible range of the particle is equal to its lifetime multiplied by its velocity, which is generally close to c in relativistic, high energy particle phenomenology. Now for the slightly clever bit:
px = h-bar implies (when remembering p = mc, and E = mc2):
x = h-bar /p = h-bar /(mc) = h-bar*c/E
so E = h-bar*c/x
when using the classical definition of energy as force times distance (E = Fx):
F = E/x = (h-bar*c/x)/x
= h-bar*c/x2.
“So we get the quantum electrodynamic force between the bare cores of two fundamental unit charges, including the inverse square distance law! This can be compared directly to Coulomb’s law, which is the empirically obtained force at large distances (screened charges, not bare charges), and such a comparison tells us exactly how much shielding of the bare core charge there is by the vacuum between the IR and UV cutoffs. So we have proof that the renormalization of the bare core charge of the electron is due to shielding by a factor of a. The bare core charge of an electron is 137.036… times the observed long-range (low energy) unit electronic charge. All of the shielding occurs within a range of just 1 fm, because by Schwinger’s calculations the electric field strength of the electron is too weak at greater distances to cause spontaneous pair production from the Dirac sea, so at greater distances there are no pairs of virtual charges in the vacuum which can polarize and so shield the electron’s charge any more.
“One argument that can superficially be made against this calculation (nobody has brought this up as an objection to my knowledge, but it is worth mentioning anyway) is the assumption that the uncertainty in distance is equivalent to real distance in the classical expression that work energy is force times distance. However, since the range of the particle given, in Yukawa’s theory, by the uncertainty principle is the range over which the momentum of the particle falls to zero, it is obvious that the Heisenberg uncertainty range is equivalent to the range of distance moved which corresponds to force by E = Fx. For the particle to be stopped over the range allowed by the uncertainty principle, a corresponding force must be involved. This is more pertinent to the short range nuclear forces mediated by massive gauge bosons, obviously, than to the long range forces.
“It should be noted that the Heisenberg uncertainty principle is not metaphysics but is solid causal dynamics as shown by Popper:
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum contains gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum effect at high energy where nuclear forces occur.)
“Experimental evidence:
“In particular:
As for the ‘Higgs boson’ mass that gives mass to particles, there is evidence there of its value. On page 98 of Not Even Wrong, Dr Woit points out:
‘Another related concern is that the U(1) part of the gauge theory is not asymptotically free, and as a result it may not be completely mathematically consistent.’
He adds that it is a mystery why only left-handed particles experience the SU(2) force, and on page 99 points out that: ‘the standard quantum field theory description for a Higgs field is not asymptotically free and, again, one worries about its mathematical consistency.’
Another thing is that the 9 masses of quarks and leptons have to be put into the Standard Model by hand together with 4 mixing angles to describe the interaction strength of the Higgs field with different particles, adding 13 numbers to the Standard Model which you want to be explained and predicted.
Important symmetries:
1. ‘electric charge rotation’ would transform quarks into leptons and vice-versa within a given family: this is described by unitary group U(1). U(1) deals with just 1 type of charge: negative charge, i.e., it ignores positive charge which is treated as a negative charge travelling backwards in time, Feynman’s fatally flawed model of a positron or anti-electron, and with solitary particles (which don’t actually exist since particles always are produced and annihilated as pairs). U(1) is therefore false when used as a model for electromagnetism, as we will explain in detail in this post. U(1) also represents weak hypercharge, which is similar to electric charge.
2. ‘isospin rotation’ would switch the two quarks of a given family, or would switch the lepton and neutrino of a given family: this is described by symmetry unitary group SU(2). Isospin rotation leads directly to the symmetry unitary group SU(2), i.e., rotations in imaginary space with 2 complex co-ordinates generated by 3 operations: the W+, W, and Z0 gauge bosons of the weak force. These massive weak bosons only interact with left-handed particles (left handed Weyl spinors). SU(2) describes doublets, matter-antimatter pairs such as mesons and (as this blog post is arguing) lepton-antilepton charge pairs in general (electric charge mechanism as well as weak isospin).
3. ‘colour rotation’ would change quarks between colour charges (red, blue, green): this is described by symmetry unitary group SU(3). Colour rotation leads directly to the Standard Model symmetry unitary group SU(3), i.e., rotations in imaginary space with 3 complex co-ordinates generated by 8 operations, the strong force gluons. There is also the concept of ‘flavor’ referring to the different types of quarks (up and down, strange and charm, top and bottom). SU(3) describes triplets of charges, i.e. baryons.
U(1) is a relatively simple phase-transformation symmetry which has a single group generator, leading to a single electric charge. (Hence, you have to treat positive charge as electrons moving backwards in time to make it incorporate antimatter! This is false because things don’t travel backwards in time; it violates causality, because we can use pair-production – e.g. electron and positron pairs created by the shielding of gamma rays from cobalt-60 using lead – to create positrons and electrons at the same time, when we choose.) Moreover, it also only gives rise to one type of massless gauge boson, which means it is a failure to predict the strength of electromagnetism and its causal mechanism of electromagnetism (attractions between dissimilar charges, repulsions between similar charges, etc.). SU(2) must be used to model the causal mechanism of electromagnetism and gravity; two charged massless gauge bosons mediate electromagnetic forces, while the neutral massless gauge boson mediates gravitation. Both the detailed mechanism for the forces and the strengths of the interactions (as well as various other predictions), arise automatically from SU(2) with massless gauge bosons replacing U(1).
Fig. 1 - The imaginary U(1) interaction of a photon with an electron, which is fine for photons interacting with electrons, but doesn't adequately describe the mechanism by which electromagnetic gauge bosons produce electromagnetic forces!
Fig. 1: The imaginary U(1) gauge invariance of quantum electrodynamics (QED) simply consists of a description of the interaction of a photon with an electron (e is the coupling constant, the effective electric charge after allowing for shielding by the polarized vacuum if the interaction is at high energy, i.e., above the IR cutoff). When the electron’s field undergoes a local phase change, a gauge field quanta called a ‘virtual photon’ is produced, which keeps the Lagrangian invariant; this is how gauge symmetry is supposed to work for U(1).
This doesn’t adequately describe the mechanism by which electromagnetic gauge bosons produce electromagnetic forces! It’s just too simplistic: the moving electron is viewed as a current, and the photon (field phase) affects that current by interacting by the electron. There is nothing wrong with this simple scheme, but it has nothing to do with the detailed causal, predictive mechanism for electromagnetic attraction and repulsion, and to make this virtual-photon-as-gauge-boson idea work for electromagnetism, you have to add two extra polarizations to the normal two polarizations (electric and magnetic field vectors) of ordinary photons. You might as well replace the photon by two charged massless gauge bosons, instead of adding two extra polarizations! You have so much more to gain from using the correct physics, than adding extra epicycles to a false model to ‘make it work’.
This is Feynman’s explanation in his book QED, Penguin, 1990, p120:
The gauge bosons of mainstream electromagnetic model U(1) are supposed to consist of photons with 4 polarizations, not 2. However, U(1) has only one type of electric charge: negative charge. Positive charge is antimatter and is not included. But in the real universe there as much positive as negative charge around!
We can see this error of U(1) more clearly when considering the SU(3) strong force: the 3 in SU(3) tells us there are three types of color charges, red, blue and green. The anti-charges are anti-red, anti-blue and anti-green, but these anti-charges are not included. Similarly, U(1) only contains one electric charge, negative charge. To make it a reliable and complete theory predictive everything, it should contain 2 electric charges: positive and negative, and 3 gauge bosons: positive charged massless photons for mediating positive electric fields, negative charged massless photons for mediating negative electric fields, and neutral massless photons for mediating gravitation. The way this correct SU(2) electrogravity unification works was clearly explained in Figures 4 and 5 of the earlier post:
Basically, photons are neutral because if they were charged as well as being massless, the magnetic field generated by its motion would produce infinite self-inductance. The photon has two charges (positive electric field and negative electric field) which each produce magnetic fields with opposite curls, cancelling one another and allowing the photon to propagate:
Fig. 2 - Mechanism of gauge bosons for electromagnetism
Fig. 2: charged gauge boson mechanism for electromagnetism, as illustrated by the Catt-Davidson-Walton work in charging up transmission lines like capacitors and checking what happens when you discharge the energy through a sampling oscilloscope. They found evidence, discussed in detail in previous posts on this blog, that the existence of an electric field is represented by two opposite-travelling (gauge boson radiation) light velocity field quanta: while overlapping, the electric fields of each add up (reinforce) but the magnetic fields disappear because the curls of the magnetic field components cancel once there is equilibrium of the exchange radiation going along the same path in opposite directions. Hence, electric fields are due to charged, massless gauge bosons with Poynting vectors, being exchanged between fermions. Magnetic fields are cancelled out in certain configurations (such as that illustrated) but in other situations where you send two gauge bosons of opposite charge through one another (in the figure the gauge bosons modelled by electricity have the same charge), you find that the electric field vectors cancel out to give an electrically neutral field, but the magnetic field curls can then add up, explaining magnetism.
The evidence for Fig. 2 is presented near the end of Catt’s March 1983 Wireless World article called ‘Waves in Space’ (typically unavailable on the internet, because Catt won’t make available the most useful of his papers for free): when you charge up x metres of cable to v volts, you do so at light speed, and there is no mechanism for the electromagnetic energy to slow down when the energy enters the cable. The nearest page Catt has online about this is here: the battery terminals of a v volt battery are indeed at v volts before you connect a transmission line to them, but that’s just because those terminals have been charged up by field energy which is flowing in all directions at light velocity, so only half of the total energy, v/2 volts, is going one way and half is going the other way. Connect anything to that battery and the initial (transient) output at light speed is only half the battery potential; the full battery potential only appears in a cable connected to the battery when the energy has gone to the far end of the cable at light speed and reflected back, adding to further in-flowing energy from the battery on the return trip, and charging the cable to v/2 + v/2 = v volts.
Because electricity is so fast (light speed for the insulator), early investigators like Ampere and Maxwell (who candidly wrote in the 1873 edition of his Treatise on Electricity and Magnetism, 3rd ed., Article 574: ‘… there is, as yet, no experimental evidence to shew whether the electric current… velocity is great or small as measured in feet per second. …’) had no idea whatsoever of this crucial evidence which shows what electricity is all about. So when you discharge the cable, instead of getting a pulse at v volts coming out with a length of x metres (i.e., taking a time of t = x/c seconds), you instead get just what is predicted by Fig. 2: a pulse of v/2 volts taking 2x/c seconds to exit. In other words, the half of the energy already moving towards the exit end, exits first. That gives a pulse of v/2 volts lasting x/c seconds. Then the half of the energy going initially the wrong way has had time to go to the far end, reflect back, and follow the first half of the energy. This gives the second half of the output, another pulse of v/2 volts lasting for another x/c seconds and following straight on from the first pulse. Hence, the observer measures an output of v/2 volts lasting for a total duration of 2x/c seconds. This is experimental fact. It was Oliver Heaviside – who translated Maxwell’s 20 long-hand differential equations into the four vector equations (two divs, two curls) – who experimentally discovered the first evidence for this when solving problems with the Newcastle-Denmark undersea telegraph cable in 1875, using ‘Morse Code’ (logic signals). Heaviside’s theory is flawed physically because he treated rise times as instantaneous, a flaw inherited by Catt, Davidson, and Walton, which blocks a complete understanding of the mechanisms at work. The Catt, Davidson and Walton history is summarised here
[The original Catt-Davidson-Walton paper can be found here (first page) and here (second page) although it contains various errors. My discussion of it is here. For a discussion of the two major awards Catt received for his invention of the first ever practical wafer-scale memory to come to market despite censorship such as the New Scientist of 12 June 1986, p35, quoting anonymous sources who called Catt ‘either a crank or visionary’ – a £16 million British government and foreign sponsored 160 MB ‘chip’ wafer back in 1988 – see this earlier post and the links it contains. Note that the editors of New Scientist are still vandals today. Jeremy Webb, current editor of New Scientist, graduated in physics and solid state electronics, so he has no good excuse for finding this stuff – physics and electronics – over his head. The previous editor to Jeremy was Dr Alum M. Anderson who on 2 June 1997 wrote to me the following insult to my intelligence: ‘I’ve looked through the files and can assure you that we have no wish to suppress the discoveries of Ivor Catt nor do we publish only articles from famous people. You should understand that New Scientist is not a primary journal and does not publish the first accounts of new experiments and original theories. These are better submitted to an academic journal where they can be subject to the usual scientific review. New Scientist does not maintain the large panel of scientific referees necessary for this review process. I’m sure you understand that science is now a gigantic enterprise and a small number of scientifically-trained journalists are not the right people to decide which experiments and theories are correct. My advice would be to select an appropriate journal with a good reputation and send Mr Catt’s work there. Should Mr Catt’s theories be accepted and published, I don’t doubt that he will gain recognition and that we will be interested in writing about him.’ Both Catt and I had already sent Dr Anderson abstracts from Catt’s peer-reviewed papers such as IEEE Trans. on Electronic Computers, vol. EC-16, no. 6, Dec. 67. Also Proc. IEE, June 83 and June 87. Also a summary of the book “Digital Hardware Design” by Catt et. al., pub. Macmillan 1979. I wrote again to Dr Anderson with this information, but he never published it; Catt on 9 June 1997 published his response on the internet which he carbon copied to the editor of New Scientist. Years later, when Jeremy Webb had taken over, I corresponded with him by email. The first time Jeremy responded was on an evening in Dec 2002, and all he wrote was a tirade about his email box being full when writing a last-minute editorial. I politely replied that time, and then sent him by recorded delivery a copy of the Electronics World January 2003 issue with my cover story about Catt’s latest invention for saving lives. He never acknowledged it or responded. When I called the office politely, his assistant was rude and said she had thrown it away unread without him seeing it! I sent another but yet again, Jeremy wasted time and didn’t publish a thing. According to the Daily Telegraph, 24 Aug. 2005: ‘Prof Heinz Wolff complained that cosmology is “religion, not science.” Jeremy Webb of New Scientist responded that it is not religion but magic. … “If I want to sell more copies of New Scientist, I put cosmology on the cover,” said Jeremy.’ But even when Catt’s stuff was applied to cosmology in Electronics World Aug. 02 and Apr. 03, it was still ignored by New ScientistHelene Guldberg has written a ‘Spiked Science’ article called Eco-evangelism about Jeremy Webb’s bigoted policies and sheer rudeness, while Professor John Baez has publicised the decline of New Scientist due to the junk they publish in place of solid physics. To be fair, Jeremy was polite to Prime Minister Tony Blair, however. I should also add that Catt is extremely rude in refusing to discuss facts. Just because he has a few new solid facts which have been censored out of mainstream discussion even after peer-reviewed publication, he incorrectly thinks that his vast assortment of more half-baked speculations are equally justified. For example, he refuses to discuss or co-author a paper on the model here. Catt does not understand Maxwell’s equations (he thinks that if you simply ignore 18 out of 20 long hand Maxwell differential equations and show that when you reduce the number of spatial dimensions from 3 to 1, then – since the remaining 2 equations in one spatial dimension contain two vital constants – that means that Maxwell’s equations are ‘shocking … nonsense’, and he refuses to accept that he is talking complete rubbish in this empty argument), and since he won’t discuss physics he is not a general physics authority, although he is expert in experimental research on logic signals, e.g., his paper in IEEE Trans. on Electronic Computers, vol. EC-16, no. 6, Dec. 67.]
Fig. 3 - Coulomb force mechanism for electric charged massless gauge bosons
Fig. 3: Coulomb force mechanism for electric charged massless gauge bosons. The SU(2) electrogravity mechanism. Think of two flak-jacket protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them. They will repel because of net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets. The bullets hitting their backs have relatively smaller impulses since they are coming from large distances and so due to drag effects their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe). That explains the electromagnetic repulsion physically. Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides. The soldiers stand back to back, shielding one another’s back, and fire their submachine guns outward at the crowd. In this situation, they attract, because of a net inward acceleration on them, pushing their backs toward towards one another, both due to the recoils of the bullets they fire, and from the strikes each receives from bullets fired in at them. When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges. This theory holds water!
This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics) to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight line summation will on average encounter similar numbers of positive and negative charges as they are randomly distributed, so such a linear summation of the charges that gauge bosons are exchanged between cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation.
Fig. 4 - Charged gauge bosons mechanism and how the potential adds up
Fig. 4: Charged gauge bosons mechanism and how the potential adds up, predicting the relatively intense strength (large coupling constant) for electromagnetism relative to gravity according to the path-integral Yang-Mills formulation. For gravity, the gravitons (like photons) are uncharged, so there is no adding up possible. But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons. Notice that massless charge electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite amount of self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in only one direction, and this is not the case obviously for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves). Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that it cancels the magnetic fields completely, preventing the self-inductance issue. Therefore, although you can never radiate a charged massless radiation beam in one direction, such beams do radiate in two directions while overlapping. This is of course what happens with the simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down. When the charging stops, the trapped energy in the capacitor travels in all directions, in equimibrium, so magnetic fields cancel and can’t be observed. This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.
The price of the random walk statistics needed to describe such a zig-zag summation (avoiding opposite charges!) is that the net force is not approximately 1080 times the force of gravity between a single pair of charges (as it would be if you simply add up all the charges in a coherent way, like a line of aligned charged capacitors, with linearly increasing electric potential along the line), but is the square root of that multiplication factor on account of the zig-zag inefficiency of the sum, i.e., about 1040 times gravity. Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes electromagnetism strength only 1040/1080 = 10-40 as strong as it would be if all the charges were aligned in a row like a row of charged capacitors (or batteries) in series circuit. Since there are around 1080 randomly distributed charges, electromagnetism as multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation being exchanged between all charges (including all charges of similar sign) is 1040 times gravity. You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come with two opposite charges and are all over the place at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This is because of the geometry of the addition. Intuitively, you may incorrectly think that the sum must be zero because on average it will cancel out. However, it isn’t, and is like the diffusive drunkard’s walk where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps. If you average a large number of different random walks, because they will all have random net directions, the vector sum is indeed zero. But for individual drunkard’s walks, there is the factual solution that a net displacement does occur. This is the basis for diffusion. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges.
For some of the many quantitative predictions and tests of this model, see previous posts such as this one.
SU(2), as used in the SU(2)xU(1) electroweak symmetry group, applies only to left-handed particles. So it’s pretty obvious that half the potential application of SU(2) is being missed out somehow in SU(2)xU(1).
SU(2) is fairly similar to U(1) in Fig. 1 above, except that SU(2) involves 22 – 1 = 3 types of charges (positive, negative and neutral), which (by moving) generate 2 types of charged currents (positive and negative currents) and 1 neutral current (i.e., the motion of an uncharged particle produces a neutral current by analogy to the process whereby the motion of a charged particle produces a charged current), requiring 3 types of gauge boson (W+, W, and Z0).
For weak interactions we need the whole of SU(2)xU(1) because SU(2) models weak isospin by using electric charges as generators, while U(1) is used to represent weak hypercharge, which looks almost identical to Fig. 1 (which illustrates the use of U(1) for quantum electrodynamics). The SU(2) isospin part of the weak interaction SU(2)xU(1) applies to only left-handed fermions, while the U(1) weak hypercharge part applies to both types of handedness, although the weak hypercharges of left and right handed fermions are not the same (see earlier post for the weak hypercharges of fermions with different spin handedness).
It is interesting that the correct SU(2) symmetry predicts massless versions of the weak gauge bosons (W+, W, and Z0). Then the mainstream go to a lot of trouble to make them massive by adding some kind of speculative Higgs field, without considering whether the massless versions really exist as the proper gauge bosons of electromagnetism and gravity. A lot of the problem is that the self-interaction of charged massless gauge bosons is a benefit in explaining the mechanism of electromagnetism (since two similar charged electromagnetic energy currents flowing through one another cancel out each other’s magnetic fields, preventing infinite self-inductance, and allowing charged massless radiation to propagate freely so long as it is exchange radiation in equilibrium with equal amounts flowing from charge A to charge B as flow from charge B to charge A; see Fig. 5 of the earlier post here). Instead of seeing how the mutual interactions of charged gauge bosons allow exchange radiation to propagate freely without complexity, the mainstream opinion is that this might (it can’t) cause infinities because of the interactions. Therefore, mainstream (false) consensus is that weak gauge bosons have to have a great mass, simply in order to remove an enormous number of unwanted complex interactions! They simply are not looking at the physics correctly.
U(2) and unification
Dr Woit has some ideas on how to proceed with the Standard Model: ‘Supersymmetric quantum mechanics, spinors and the standard model’, Nuclear Physics, v. B303 (1988), pp. 329-42; and ‘Topological quantum theories and representation theory’, Differential Geometric Methods in Theoretical Physics: Physics and Geometry, Proceedings of NATO Advanced Research Workshop, Ling-Lie Chau and Werner Nahm, Eds., Plenum Press, 1990, pp. 533-45. He summarises the approach in
The SU(3) strong force (colour charge) gauge symmetry
The SU(3) strong interaction – which has 3 color charges (red, blue, green) and 32 – 1 = 8 gauge bosons – is again virtually identical to the U(1) scheme in Fig. 1 above (except that there are 3 charges and 8 spin-1 gauge bosons called gluons, instead of the alleged 1 charge and 1 gauge boson in the flawed U(1) model of QED, and the 8 gluons carry color charge, whereas the photons of U(1) are uncharged). The SU(3) symmetry is actually correct because it is an empirical model based on observed particle physics, and the fact that the gauge bosons of SU(3) do carry colour makes it a proper causal model of short range strong interactions, unlike U(1). For an example of the evidence for SU(3), see the illustration and history discussion in this earlier post.SU(3) is based on an observed (empirical, experimentally determined) particle physics symmetry scheme called the eightfold way. This is pretty solid experimentally, and summarised all the high energy particle physics experiments from about the end of WWII to the late 1960s. SU(2) describes the mesons which were originally studied in natural cosmic radiation (pions were the first mesons discovered, and they were found in cosmic radiation from outer space in 1947, at Bristol University). A type of meson, the pion, is the long-range mediator of the strong nuclear force between nucleons (neutrons and protons), which normally prevents the nuclei of atoms from exploding under the immense Coulomb repulsion of having many protons confined in the small space of the nucleus. The pion was accepted as the gauge boson of the strong force predicted by Japanese physicist Yukawa, who in 1949 was awarded the Nobel Prize for predicting that meson right back in 1935. So there is plenty of evidence for both SU(3) color forces and SU(2) isospin. The problems all arise from U(1). To give an example of how SU(3) works well with charged gauge bosons, gluons, remember that this property of gluons is responsible for the major discovery of asymptotic freedom of confined quarks. What happens is that the mutual interference of the 8 different types of charged gluons with pairs of virtual quarks and virtual antiquarks at very small distances between particles (high energy) weakens the color force. The gluon-gluon interactions screen the color charge at short distances because each gluon contains two color charges. If each gluon contained just one color charge, like the virtual fermions in pair production in QED, then the screening effect would be most significant at large, rather than short, distances. Because the effective colour charge diminishes at very short distances, for a particular range of distances this color charge fall as you get closer offsets the inverse-square force law effect (the divergence of effective field lines), so the quarks are completely free – within given limits of distance – to move around within a neutron or a proton. This is asymptotic freedom, an idea from SU(3) that was published in 1973 and resulted in Nobel prizes in 2004. Although colour charges are confined in this way, some strong force ‘leaks out’ as virtual hadrons like neutral pions and rho particles which account for the strong force on the scale of nuclear physics (a much larger scale than is the case in fundamental particle physics): the mechanism here is similar to the way that atoms which are electrically neutral as a whole can still attract one another to form molecules, because there is a residual of the electromagnetic force left over. The strong interaction weakens exponentially in addition to the usual fall in potential (1/distance) or force (inverse square law), so at large distances compared to the size of the nucleus it is effectively zero. Only electromagnetic and gravitational forces are significant at greater distances. The weak force is very similar to the electromagnetic force but is short ranged because the gauge bosons of the weak force are massive. The massiveness of the weak force gauge bosons also reduces the strength of the weak interaction compared to electromagnetism.
The mechanism for the fall in color charge coupling strength due to interference of charged gauge bosons is not the whole story. Where is the energy of the field going where the effective charge falls as you get closer to the middle? Obvious answer: the energy lost from the strong color charges goes into the electromagnetic charge. Remember, short-range field charges fall as you get closer to the particle core, while electromagnetic charges increase; these are empirical facts. The strong charge decreases sharply from about 137e at the greatest distances it extends to (via pions) to around 0.15e at 91 GeV, while over the same range of scattering energies (which are appriximately inversely proportional to the distance from the particle core), the electromagnetic charge has been observed to increase by 7%. We need to apply a new type of continuity equation to the conservation of gauge boson exchange radiation energy of all types, in order to deduce vital new physical insights from the comparison of these figures for charge variation as a function of distance. The suggested mechanism in a previous post is:
‘We have to understand Maxwell’s equations in terms of the gauge boson exchange process for causing forces and the polarised vacuum shielding process for unifying forces into a unified force at very high energy. If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you learn that stringy supersymmetry first isn’t needed and second is quantitatively plain wrong. At low energies, the experimentally determined strong nuclear force coupling constant which is a measure of effective charge is alpha = 1, which is about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so. So the strong force falls off in strength as you get closer by higher energy collisions, while the electromagnetic force increases! Conservation of gauge boson mass-energy suggests that energy being shielded form the electromagnetic force by polarized pairs of vacuum charges is used to power the strong force, allowing quantitative predictions to be made and tested, debunking supersymmetry and existing unification pipe dreams.’
Force strengths as a function of distance from a particle core
I’ve written previously that the existing graphs showing U(1), SU(2) and SU(3) force strengths as a function of energy are pretty meaningless; they do not specify which particles are under consideration. If you scatter leptons at energies up to those which so far have been available for experiments, they don’t exhibit any strong force SU(3) interactions.What should be plotted is effective strong, weak and electromagnetic charge as a function of distance from particles. This is easily deduced because the distance of closest approach of two charged particles in a head-on scatter reaction is easily calculated: as they approach with a given initial kinetic energy, the repulsive force between them increases, which slows them down until they stop at a particular distance, and they are then repelled away. So you simply equate the initial kinetic energy of the particles with the potential energy of the repulsive force as a function of distance, and solve for distance. The initial kinetic energy is radiated away as radiation as they decelerate. There is some evidence from particle collision experiments that the SU(3) effective charge really does decrease as you get closer to quarks, while the electromagnetic charge increases. Levine and Koltick published in PRL (v.78, 1997, no.3, p.424) in 1997 that the electron’s charge increases from e to 1.07e as you go from low energy physics to collisions of electrons at an energy of 91 GeV, i.e., a 7% increase in charge. At low energies, the experimentally determined strong nuclear force coupling constant which is a measure of effective charge is alpha = 1, which is about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so.
The full investigation of running-couplings and the proper unification of the corrected Standard Model is the next priority for detailed investigation. (Some details of the mechanism can be found in several other recent posts on this blog, e.g., here.)
‘The observed couping constant for W’s is much the same as that for the photon – in the neighborhood of j [Feynman’s symbol j is related to alpha or 1/137.036… by: alpha = j^2 = 1/137.036…]. Therefore the possibility exists that the three W’s and the photon are all different aspects of the same thing. [This seems to be the case, given how the handedness of the particles allows them to couple to massive particles, explaining masses, chiral symmetry, and what is now referred to in the SU(2)xU(1) scheme as ‘electroweak symmetry breaking’.] Stephen Weinberg and Abdus Salam tried to combine quantum electrodynamics with what’s called the ‘weak interactions’ (interactions with W’s) into one quantum theory, and they did it. But if you just look at the results they get you can see the glue [Higgs mechanism problems], so to speak. It’s very clear that the photon and the three W’s [W+, W, and W0 /Z0 gauge bosons] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still the ’seams’ [Higgs mechanism problems] in the theories; they have not yet been smoothed out so that the connection becomes … more correct.’ [Emphasis added.] – R. P. Feynman, QED, Penguin, 1990, pp141-142.Mechanism for loop quantum gravity with spin-1 (not spin-2) gravitons
Peter Woit gives a discussion of the basic principle of LQG in his book:
‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p189.
I watched Lee Smolin’s Perimeter Institute lectures, “Introduction to Quantum Gravity”, and he explains that loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a standard model-type, Yang-Mills, theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity.
It’s pretty evident that the quantum gravity loops are best thought of as being the closed exchange cycles of gravitons going between masses (or other gravity field generators like energy fields), to and fro, in an endless cycle of exchange. That’s the loop mechanism, the closed cycle of Yang-Mills exchange radiation being exchanged from one mass to another, and back again, continually.
According to this idea, the graviton interaction nodes are associated with the ‘Higgs field quanta’ which generates mass. Hence, in a Penrose spin network, the vertices represent the points where quantized masses exist. Some predictions from this are here.
Professor Penrose’s interesting original article on spin networks, Angular Momentum: An Approach to Combinatorial Space-Time, published in ‘Quantum Theory and Beyond’ (Ted Bastin, editor), Cambridge University Press, 1971, pp. 151-80, is available online, courtesy of Georg Beyerle and John Baez.
Update (25 June 2007):
Lubos Motl versus Mark McCutcheon’s book The Final Theory
Seeing that there is some alleged evidence that mainstream string theorists are bigoted charlatans, string theorist Dr Lubos Motl, who is soon leaving his Assistant Professorship at Harvard, made me uneasy when he attacked Mark McCutcheon’s book The Final Theory. Motl wrote a blog post attacking McCutcheon’s book by saying that: ‘Mark McCutcheon is a generic arrogant crackpot whose IQ is comparable to chimps.’ Seeing that Motl is a stringer, this kind of abuse coming from him sounds like praise to my ears. Maybe McCutcheon is not so wrong? Anyway, at lunch time today, I was in Colchester town centre and needed to look up a quotation in one of Feynman’s books. Directly beside Feynman’s QED book, on the shelf of Colchester Public Library, was McCutcheon’s chunky book The Final Theory. I found the time to look up what I wanted and to read all the equations in McCutcheon’s book.
Motl ignores McCutcheon’s theory entirely, and Motl is being dishonest when claiming: ‘his [McCutcheon’s] unification is based on the assertion that both relativity as well as quantum mechanics is wrong and should be abandoned.’
This sort of deception is easily seen, because it has nothing to do with McCutcheon’s theory! McCutcheon’s The Final Theory is full of boring controversy or error, such as the sort of things Motl quotes, but the core of the theory is completely different and takes up just two pages: 76 and 194. McCutcheon claims there’s no gravity because the Earth’s radius is expanding at an accelerating rate equal to the acceleration of gravity at Earth’s surface, g = 9.8 ms-2. Thus, in one second, Earth’s radius (in McCutcheon’s theory) expands by (1/2)gt2 = 4.9 m.
I showed in an earlier post that there is a simple relationship between Hubble’s empirical redshift law for the expansion of the universe (which can’t be explained by tired light ideas and so is a genuine observation) and acceleration:
Hubble recession: v = HR = dR/dt, so dt = dR/v, hence outward acceleration a = dv/dt = d[HR]/[dR/v] = vH = RH2
McCutcheon instead defines a ‘universal atomic expansion rate’ on page 76 of The Final Theory which divides the increase in radius of the Earth over a one second interval (4.9 m) into the Earth’s radius (6,378,000 m, or 6.378*106 m). I don’t like the fact he doesn’t specify a formula properly to define his ‘universal atomic expansion rate’.
McCutcheon should be clear: he is dividing (1/2)gt2 into radius of Earth, RE, to get his ‘universal atomic expansion rate, XA:
XA = (1/2)gt2/RE,
which is a dimensionless ratio. On page 77, McCutcheon honestly states: ‘In expansion theory, the gravity of an object or planet is dependent on it size. This is a significant departure from Newton’s theory, in which gravity is dependent on mass.‘ At first glance, this is a crazy theory, requiring Earth (and all the atoms in it, for he makes the case that all masses expand) to expand much faster than the rate of expansion of the universe.
However, on page 194, he argues that the outward acceleration of the an atom of radius R is:
a = XAR,
now the first thing to notice is that acceleration has units of ms-2 and R has units of m. So this equation is false dimensionally if XA = (1/2)gt2/RE. The only way to make a = XAR accurate dimensionally is to change the definition of XA by dropping t2 from the dimensionless ratio (1/2)gt2/RE to the ratio:
XA = (1/2)g/RE,
which has correct units of s-2. So we end up with this accurate version of McCutcheon’s formula for the outward acceleration of an atom of radius R (we will use the average radius of orbit of the chaotic electron path in the ground state of a hydrogen atom for R, which is 5.29*10-11 m):
a = XAR = [(1/2)g/RE]R, which can be equated to Newton’s formula for acceleration due to mass m, which is 1.67*10-27 kg:
a = [(1/2)g/RE]R
= mG/R2.
Hence, McCutcheon on page 194 calculates a value for G by rearranging these equations:
G = (1/2)gR3/(REm)
=(1/2)*(9.81)*(5.29*10-11)3 /[(6.378*106)*(1.67*10-27)]
= 6.82*10-11 m3/(kg*s2).
Which is only 2% higher than the measured value of
G = 6.673 *10-11 m3/(kg*s2).
After getting this result on page 194, McCutcheon remarks on page 195: ‘Recall … that the value for XA was arrived at by measuring a dropped object in relation to a hypothesized expansion of our overall planet, yet here this same value was borrowed and successfully applied to the proposed expansion of the tinest atom.
We can compress McCutcheon’s theory: what is he basically saying is the scaling ratio:
a = (1/2)g(R/RE) which when set equal to Newton’s law mG/R2, rearranges to give: G = (1/2)gR3/(REm).
However, McCutcheon’s own formula is just his guessed scaling law: a = (1/2)g(R/RE).
Although this quite accurately scales the acceleration of gravity at Earth’s surface (g at RE) to the acceleration of gravity at the ground state orbit radius of a hydrogen atom (a at R), it is not clear if this is just a coincidence, or if it is really anything to do with McCutcheon’s expanding matter idea. He did not derive the relationship, he just defined it by dividing the increased radius into the Earth’s radius and then using this ratio in another expression which is again defined without a rigorous theory underpinning it. In its present form, it is numerology. Furthermore, the theory is not universal: ithe basic scaling law that McCutcheon obtains does not predict the gravitational attraction of the two balls Cavendish measured; instead it only relates the gravity at Earth’s surface to that at the surface of an atom, and then seems to be guesswork or numerology (although it is an impressively accurate ‘coincidence’). It doesn’t have the universal application of Newton’s law. There may be another reason why a = (1/2)g(R/RE) is a fairly accurate and impressive relationship.
Since I regularly oppose censorship based on fact-ignoring consensus and other types of elitist fascism in general (fascism being best defined as the primitive doctrine that ‘might is right’ and who speaks loudest or has the biggest gun is the scientifically correct), it is only correct that I write this blog post to clarify the details that really are interesting.
Maybe McCutcheon could make his case better to scientists by putting the derivation and calculation of G on the front cover of his book, instead of a sunset. Possibly he could justify his guesswork idea to crackpot string theorists by some relativistic obfuscation invoking Einstein, such as:
‘According to relativity, it’s just as reasonable to think as the Earth zooming upwards up to hit you when you jump off a cliff, as to think that you are falling downward.’
If he really wants to go down the road of mainstream hype and obfuscation, he could maybe do even better by invoking the popular misrepresentation of Copernicus:
‘According to Copernicus, the observer is at ‘no special place in the universe’, so it is as justifiable to consider the Earth’s surface accelerating upwards to meet you, as vice-versa. Copernicus used a spaceship to travel all throughout the entire universe on a spaceship or a flying carpet to confirm the crackpot modern claim that we are not at a special place in the universe, you know.’
The string theorists would love that kind of thing (i.e., assertions that there is no preferred reference frame, based on lies) seeing that they think spacetime is 10 or 11 dimensional, based on lies.
My calculation of G is entirely different, being due to a causal mechanism of graviton radiation, and it has detailed empirical (non-speculative) foundations to it, and a derivation which predicts G in terms of the Hubble parameter and the local density:
G = (3/4)H2/(rπe3),
plus a lot of other things about cosmology, including the expansion rate of the universe at long distances in 1996 (two years before it was confirmed by Saul Perlmutter’s observations in 1998). However, this is not necessarily incompatible with McCutcheon’s theory. There are such things as mathematical dualities: where completely different calculations are really just different ways of modelling the same thing.
McCutcheon’s book is not just the interesting sort of calculation above, sadly. It also contains a large amount of drivel (particularly in the first chapter) about his alleged flaw in the equation: W = Fd or work energy = force applied * distance moved by force in the direction that the force operates. McCutcheon claims that there is a problem with this formula, and that work energy is being used continuously by gravity, violating conservation of energy. On page 14 (2004 edition) he claims falsely: ‘Despite the ongoing energy expended by Earth’s gravity to hold objects down and the moon in orbit, this energy never diminishes in strength…’
The error McCutcheon is making here is that no energy is used up unless gravity is making an object move. So the gravity field is not depleted of a single Joule of energy when an object is simply held in one place by gravity. For orbits, gravity force acts at right angles to the distance the moon is going in its orbit, so gravity is not using up energy in doing work on the moon. If the moon was falling straight down to earth, then yes, the gravitational field would be losing energy to the kinetic energy that the moon would gain as it accelerated. But it isn’t falling: the moon is not moving towards us along the lines of gravitational force; instead it is moving at right angles to those lines of force. McCutcheon does eventually get to this explanation on page 21 of his book (2004 edition). But this just leads him to write several more pages of drivel about the subject: by drivel, I mean philosophy. On a positive note, McCutcheon near the end of the book (pages 297-300 of the 2004 edition) correctly points out that that where two waves of equal amplitude and frequency are superimposed (i.e., travel through one another) exactly out of phase, their waveforms cancel out completely due to ‘destructive interference’. He makes the point that there is an issue for conservation of energy where such destructive interference occurs. For example, Young claimed that destructive interference of light occurs at the dark fringes on the screen in the double-slit experiment. Is it true that two out-of-phase photons really do arrive at the dark fringes, cancelling one another out? Clearly, this would violate conservation of energy! Back in February 1997, when I was editor of Science World magazine (ISSN 1367-6172), I published an article by the late David A. Chalmers on this subject. Chalmers summed the Feynman path integral for the two slits and found that if Young’s explanation was correct, then half of the total energy would be unaccounted for in the dark fringes. The photons are not arriving at the dark fringes. Instead, they arrive in the bright fringes.
The interference of radio waves and other phased waves is also known as the Hanbury-Brown-Twiss effect, whereby if you have two radio transmitter antennae, the signal that can be received depends on the distance between them: moving they slightly apart or together changes the relative phase of the transmitted signal from one with respect to the other, cancelling the signal out or reinforcing it. (It depends on the frequencies and amplitude as well: if both transmitters are on the same frequency and have the same output amplitude and radiation power, then perfectly destructive interference if they are exactly out of phase, or perfect reinforcement – constructive interference – if they are exactly in-phase, will occur.) This effect also actually occurs in electricity, replacing Maxwell’s mechanical ‘displacement current’ of vacuum dielectric charges.
Feynman quotation
The Feynman quotation I located is this:
‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn – the phenomena that we see are very well approximated by rules such as ‘light travels in straight lines’ because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the screen), these rules fail – we discover that light doesn’t have to go in straight lines, there are interferences created by two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go [influenced by the randomly occurring fermion pair-production in the strong electric field on small distance scales, according to quantum field theory], each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows to predict where an electron is likely to go.’
– R. P. Feynman, QED, Penguin, London, 1990, pp. 84-5. (Emphasis added in bold.)
Compare that to:
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
Heisenberg quantum mechanics: Poincare chaos applies on the small scale, since the virtual particles of the Dirac sea in the vacuum regularly interact with the electron and upset the orbit all the time, giving wobbly chaotic orbits which are statistically described by the Schroedinger equation – it’s causal, there is no metaphysics involved. The main error is the false propaganda that ‘classical’ physics models contain no inherent uncertainty (dice throwing, probability): chaos emerges even classically from the 3+ body problem, as first shown by Poincare.
Anti-causal hype for quantum entanglement: Dr Thomas S. Love of California State University has shown that entangled wavefunction collapse (and related assumptions such as superimposed spin states) are a mathematical fabrication introduced as a result of the discontinuity at the instant of switch-over between time dependent and time independent versions of Schroedinger at time of measurement.
Just as the Copenhagen Interpretation was supported by lies (such as von Neumann’s false ‘disproof’ of hidden variables in 1932) and fascism (such as the way Bohm was treated by the mainstream when he disproved von Neumann’s ‘proof’ in the 1950s), string ‘theory’ (it isn’t a theory) is supported by similar tactics which are political in nature and have nothing to do with science:
‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996. ‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’ – Dr Edward Witten, M-theory originator, Nature, Vol 444, 16 November 2006.
‘Superstring/M-theory is the language in which God wrote the world.’ – Assistant Professor Lubos Motl, Harvard University, string theorist and friend of Edward Witten, quoted by Professor Bert Schroer, (p. 21).
‘The mathematician Leonhard Euler … gravely declared: “Monsieur, (a + bn)/n = x, therefore God exists!” … peals of laughter erupted around the room …’ –
‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say “Well, it still might be true”. For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s possible mathematically, but why not seven? … In other words, there’s no reason whatsoever in superstring theory that it isn’t eight of the ten dimensions that get wrapped up … So the fact that it might disagree with experiment is very tenuous, it doesn’t produce anything; it has to be excused most of the time. … All these numbers … have no explanations in these string theories – absolutely none!’ – Richard P. Feynman, in Davies & Brown, Superstrings, 1988, pp 194-195. [Quoted by Tony Smith.]
Feynman predicted today’s crackpot run world in his 1964 Cornell lectures (broadcast on BBC2 in 1965 and published in his book Character of Physical Law, pp. 171-3):
‘The inexperienced, and crackpots, and people like that, make guesses that are simple, but [with extensive knowledge of the actual facts rather than speculation] you can immediately see that they are wrong, so that does not count. … There will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving in on a territory.’
In the same book Feynman states:
Sent: 02/01/03 17:47 Subject: Your_manuscript LZ8276 Cook {gravity unification proof} Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories…. Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters
‘If you are not criticized, you may not be doing much.’ – Donald Rumsfeld.
The Standard Model, which Edward Witten has done a lot of useful work on (before he went into string speculation), is the best tested physical theory. Forces result from radiation exchange in spacetime. The big bang matter’s speed is 0-c in spacetime of 0-15 billion years, so outward force F = ma = 1043 N. Newton’s 3rd law implies equal inward force, which from the Standard Model possibilities will be carried by gauge bosons (exchange radiation), predicting current cosmology, gravity and the contraction of general relativity, other forces and particle masses.
‘A fruitful natural philosophy has a double scale or ladder ascendant and descendant; ascending from experiments to axioms and descending from axioms to the invention of new experiments.’ – Novum Organum.
This predicts gravity in a quantitative, checkable way, from other constants which are being measured ever more accurately and will therefore result in more delicate tests. As for mechanism of gravity, the dynamics here which predict gravitational strength and various other observable and further checkable aspects, are consistent with LQG and Lunsford’s gravitational-electromagnetic unification in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion), and 3 expanding time dimensions (the spacetime between matter expands due to the big bang according to Hubble’s law).
‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – Feynman, QED, Penguin, 1990, page 54.
That’s wave particle duality explained. The path integrals don’t mean that the photon goes on all possible paths but as Feynman says, only a “small core of nearby space”.
The double-slit interference experiment is very simple: the photon has a transverse spatal extent. If that overlaps two slits, then the photon gets diffracted by both slits, displaying interference. This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says. It doesn’t take every path: most of the energy is transferred along the classical path, and is near that. Similarly, you find people saying that QFT says that the vacuum is full of loops of annihilation-creation. When you check what QFT says, it actually says that those loops are limited to the region between the IR and UV cutoff. If loops existed everywhere in spacetime, ie below the IR cutoff or beyond 1 fm, then the whole vacuum would be polarized enough to cancel out all real charges. If loops existed beyond the UV cutoff, ie to zero distance from a particle, then the loops would have infinite energy and momenta and the effects of those loops on the field would be infinite, again causing problems.
So the vacuum simply isn’t full of annihilation-creation loops (they only extend out to 1 fm around particles). The LQG loops are entirely different (exchange radiation) and cause gravity, not cosmological constant effects. Hence no dark energy mechanism can be attributed to the charge creation effects in the Dirac sea, which exists only close to real particles.
‘By struggling to find a mathematically precise formulation, one often discovers facets of the subject at hand that were not apparent in a more casual treatment. And, when you succeed, rigorous results (”Theorems”) may flow from that effort.
‘But, particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigorous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’ – Professor Jacques Distler, blog entry on The Role of Rigour.
‘[Unorthodox approaches] now seem the antithesis of modern science, with consensus and peer review at its very heart. … The sheer number of ideas in circulation means we need tough, sometimes crude ways of sorting…. The principle that new ideas should be verified and reinforced by an intellectual community is one of the pillars of scientific endeavour, but it comes at a cost.’ – Editorial, p5 of the 9 Dec 06 issue of New Scientist.
Far easier to say anything else is crackpot. String isn’t, because it’s mainstream, has more people working on it, and has a large number of ideas connecting one another. No ‘lone genius’ can ever come up with anything more mathematically complex, and amazingly technical than string theory ideas, which are the result of decades of research by hundreds of people. Ironically, the core of a particle is probably something like a string, albeit not the M-theory 10/11 dimensional string, just a small loop of energy which acquires mass by coupling to an external mass-giving bosonic field. It isn’t the basic idea of string which is necessarily wrong, but the way the research is done and the idea that by building a very large number of interconnected buildings on quicksand, it will be absurd for disaster to overcome the result which has no solid foundations. In spacetime, you can equally well interpret recession of stars as a variation of velocity with time past as seen from our frame of reference, or a variation of velocity with distance (the traditional ‘tunnel-vision’ due to Hubble).
Some people weirdly think Newton had a theory of gravity which predicted G, or that because Witten claimed in Physics Today magazine in 1996 that his stringy M-theory has the remarkable property of “predicting gravity”, he can do it. The editor of Physical Review Letters seemed to suggest this to me when claiming falsely that the facts above leading to a prediction of gravity etc is an “alternative to currently accepted theories”. Where is the theory in string? Where is the theory in M-”theory” which predicts G? It only predicts a spin-2 graviton mode for gravity, and the spin-2 graviton has never been observed. So I disagree with Dr Brown. This isn’t an alternative to a currently accepted theory. It’s tested and validated science, contrasted to currently accepted religious non-theory explaining an unobserved particle by using unobserved extra dimensional guesswork. I’m not saying string should be banned, but I don’t agree that science should be so focussed on stringy guesswork that the hard facts are censored out in consequence!)
There is some dark matter in the form of the mass of neutrinos and other radiations which will be attracted around galaxies and affect their rotation, but it is bizarre to try to use discrepancies in false theories as “evidence” for unobserved “dark energy” and “dark matter”, neither of which has been found in any particle physics experiment or detector in history. The “direct evidence of dark matter” seen in photos of distorted images don’t say what the “dark matter” is and we should remember that Ptolemy’s followers were rewarded for claiming direct evidence of the earth centred universe was apparent to everyone who looked at the sky. Science requires evidence, facts, and not faith based religion which ignores or censors out the evidence and the facts.
The reason for current popularity of M-theory is precisely that it claims to not be falsifiable, so it acquires a religious or mysterious allure to quacks, just as Ptolemy’s epicycles, phlogiston, caloric, Kelvin’s vortex atom and Maxwell’s mechanical gear box aether did in the past. Dr Peter Woit explains the errors and failures of mainstream string theory in his book Not Even Wrong (Jonathan Cape, London, 2006, especially pp 176-228): using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%.
By claiming to ‘predict’ everything conceivable, it predicts nothing falsifiable at all and is identical to quackery, although string theory might contain some potentially useful spin-offs such as science fiction and some mathematics (similarly, Ptolemy’s epicycles theory helped to advance maths a little, and certainly Maxwell’s mechanical theory of aether led ultimately to a useful mathematical model for electromagnetism; Kelvin’s false vortex atom also led to some ideas about perfect fluids which have been useful in some aspects of the study of turbulence and even general relativity). Even if you somehow discovered gravitons, superpartners, or branes, these would not confirm the particular string theory model anymore than a theory of leprechauns would be confirmed by discovering small people. Science needs quantitative predictions.
Dr Imre Lakatos explains the way forward in his article ‘Science and Pseudo-Science’:
Really, there is nothing more anyone can do after making a long list of predictions which have been confirmed by new measurements, but are censored out of mainstream publications by the mainstream quacks of stringy elitism. Prof Penrose wrote this depressing conclusion well in 2004 in The Road to Reality so I’ll quote some pertinent bits from the British (Jonathan Cape, 2004) edition:
On page 1020 of chapter 34 ‘Where lies the road to reality?’, 34.4 Can a wrong theory be experimentally refuted?, Penrose says: ‘One might have thought that there is no real danger here, because if the direction is wrong then the experiment would disprove it, so that some new direction would be forced upon us. This is the traditional picture of how science progresses. Indeed, the well-known philosopher of science [Sir] Karl Popper provided a reasonable-looking criterion [K. Popper, The Logic of Scientific Discovery, 1934] for the scientific admissability [sic; mind your spelling Sir Penrose or you will be dismissed as a loony: the correct spelling is admissibility] of a proposed theory, namely that it be observationally refutable. But I fear that this is too stringent a criterion, and definitely too idealistic a view of science in this modern world of “big science”.’
On page 1026, Penrose gets down to the business of how science is really done: ‘In the present climate of fundamental research, it would appear to be much harder for individuals to make substantial progress than it had been in Einstein’s day. Teamwork, massive computer calculations, the pursuing of fashionable ideas – these are the activities that we tend to see in current research. Can we expect to see the needed fundamentally new perspectives coming out of such activities? This remains to be seen, but I am left somewhat doubtful about it. Perhaps if the new directions can be more experimentally driven, as was the case with quantum mechanics in the first third of the 20th century, then such a “many-person” approach might work.’
‘Cargo cult science is defined by Feynman as a situation where a group of people try to be scientists but miss the point. Like writing equations that make no checkable predictions… Of course if the equations are impossible to solve (like due to having a landscape of 10^500 solutions that nobody can handle), it’s impressive, and some believe it. A winning theory is one that sells the most books.’ –
Path integrals for gauge boson radiation versus path integrals for real particles, and Weyl’s gauge symmetry principle
The previous post plus a re-reading of Professor Zee’s Quantum Field Theory in a Nutshell (Princeton, 2003) suggests a new formulation for quantum gravity, the mechanism and mathematical predictions of which were given two posts ago.The sum over histories for real particles is used to work out the path of least action, such as the path of a photon of light which takes the least time to bounce off a mirror. You can do the same thing for the path of a real electron, or the path of a drunkard’s walk. The integral tells you the effective path taken by the particle, or the probability of any given path being taken, from many possible paths.
For gauge bosons or vector bosons, i.e., force-mediating radiation, the role of the path integral is no longer to find the probability of a path being taken or the effective path. Instead, gauge bosons are exchanged over many paths simultaneously. Hence there are two totally different applications of path integrals we are concerned with:
• Applying the path integral for real particles involves evaluating a lot of paths, most of which are not actually taken (the real particle takes only one of those paths, although as Feynman said, it uses a ‘small core of nearby space’ so it can be affected by both of two slits in a screen, provided those slits are close together, within a transverse wavelength or so, so the small core of paths taken overlap both slits).
• Applying the path integral for gauge bosons involves evaluating a lot of paths which are all actually being taken, because the extensive force field is composed of lots of gauge bosons being exchanged between charges, really going all over the place (for long-range gravity and electromagnetism).
In both cases the path taken by a given real particle or a single gauge boson must be composed of straight lines in between interactions (see Fig. 1 of previous post) because the curvature of general relativity appears to be a classical approximation to a lot of small discrete deflections due to discrete interactions with field quanta (sometimes curves are used in Feynman diagrams for convenience, but according to quantum field theory all mechanisms for curvature actually involve lots of little deflections by the quanta of fields).
The calculations of quantum gravity, two posts ago, use geometry to evaluate these straight-line gauge boson paths for gravity and electromagnetism. Presumably, translating the simplicity of the calculations based on geometry in that post into a path integrals will appeal more to the stringy mainstream. Loop quantum gravity methods of summing up a lot of interaction graphs will be used to do this. What is vital are directional asymmetries, which transform a perfect symmetry of gauge boson exchanges in all directions into a force, represented by the geometry of Fig. 1 (below). One way to convert that geometry into a formula is to consider the inward-outward travelling isotropic graviton exchange radiation by using divergence operator. I think this can be done easily because there are two useful physical facts which make the geometry simpler even than appears from Fig. 1: first, the shield area x in Fig. 1 is extremely small so the asymmetry cone can not ever have a large sized base for any practical situation; second, by Newton’s proof the gravity inverse square law force from a lot of little particles spread out in the Earth is the same as you get by mathematically assuming that all the little masses (fundamental particles) are not spread throughout a large planet but are all at the centre. So a path integral formulation for the geometry of Fig. 1 is simple.
Fig. 1: Mechanism for quantum gravity (a tiny falling test mass is located in the middle of the universe, which experiences isotropic graviton radiation - not necessarily spin 2 gravitons, preferably spin 1 gravitons which cause attraction by simply pushing things as this allows predictions as wel shall see - from all directions except that where there is an asymmetry produced by the mass which shields that radiation) . By Newton’s 3rd law the outward force of the big bang has an equal inward force, and gravity is equal to the proportion of that inward force covered by the shaded cone in this diagram: (force of gravity) = (total inward force).(cross sectional area of shield projected out to radius R, i.e., the area of the base of the cone marked x, which is the product of the shield’s cross-sectional area and the ratio R2/r2) / (total spherical area with radius R).
Fig. 1: Mechanism for quantum gravity (a tiny falling test mass is located in the middle of the universe, which experiences isotropic graviton radiation – spin 1 gravitons which cause attraction by simply pushing things as this allows predictions as proved in the earlier post – from all directions except that where there is an asymmetry produced by the mass which shields that radiation). By Newton’s 3rd law the outward force of the big bang has an equal inward force, and gravity is equal to the proportion of that inward force covered by the shaded cone in this diagram: (force of gravity) = (total inward force).(cross sectional area of shield projected out to radius R, i.e., the area of the base of the cone marked x, which is the product of the shield’s cross-sectional area and the ratio R2/r2) / (total spherical area with radius R). (Full proof here.)
Weyl’s gauge symmetry principle
A symmetry is anything that doesn’t change as the result of a transformation. For example, the colour of a plastic pen doesn’t change when you rotate it, so the colour is a symmetry of the pen when the transformation type is a rotation. If you transform the plastic pen by burning it, colour is not a symmetry of the pen (unless the pen was the colour of carbon in the first place).
A gauge symmetry is one where scalable quantities (gauges) are involved. For example, there is a symmetry in the fact that the same amount of energy is required to lift a 1 kg mass up by a height of 1 metre, regardless of the original height of the mass above sea level. (This example is not completely true, but it is almost true because the fall in gravity acceleration with height is small, as gravity is only 0.3% weaker at the top of the tallest mountain than it is at sea level.)
The female mathematician Emmy Noether in 1915 proved a great theorem which states that any continuous symmetry leads to a conservation law, e.g., the symmetry of physical laws (due to these laws remaining the same while time passes) leads to the principle of conservation of energy! This particularly impressive example of Noether’s theorem does not strictly apply to forces over very long time scales, because, as proved, fundamental force coupling constants (relative charges) increase in direct proportion to the age of the universe. However, the theorem is increasingly accurate as the time scale involved is reduced and the inaccuracy becomes trivial when the time considered is small compared to the age of the universe.
At the end of Quantum Field Theory in a Nutshell (at page 457), Zee points out that Maxwell’s equations unexpectedly contained two hidden symmetries, Lorentz invariance and gauge invariance: ‘two symmetries that, as we now know, literally hold the key to the secrets of the universe.’
He then argues that Maxwell’s long-hand differential equations masked these symmetries and it took Einstein’s genius to uncover them (special relativity for Lorentz invariance, general relativity for the tensor calculus with the repeated-indices summation convention, e.g., mathematical symbol compressions by defining notation which looks something like: Fab = 2dAab = daAb – dbAa). This is actually a surprisingly good point to make.
Zee, judging from what his Quantum Field Theory in a Nutshell book contains, does not seem to be aware how useful Heaviside’s vector calculus is (Heaviside compressed Maxwell’s 20 equations into 4 field equations plus a continuity equation for conservation of charge, while Einstein merely compressed the 4 field equations into 2, a less impressive feat but one leading to less intuitive equations; divergence and curl equations in vector calculus describe simple divergence of radial electric field lines which you can picture, and simple curling of electric or magnetic field lines which again are easy to picture). In addition, the way relativity comes from Maxwell’s equations is best expressed non-mathematically, just because it is so simple: if you move relative to an electric charge you get a magnetic field, if you don’t move relative to an electric charge you don’t see the magnetic field.
Zee adds: ‘it is entirely possible that an insightful reader could find a hitherto unknown symmetry hidden in our well-studied field theories.’
Well, he could start with the insight that U(1) doesn’t exist, as explained in the previous post. There are no single charged leptons about, only pairs of them. They are created in pairs, and are annihilated as pairs. So really you need some form of SU(2) symmetry to replace U(1). Such a replacement as a bonus predicts gravity and electromagnetism quantitatively, giving the coupling constants for each and the complete mechanism for each force.
Just to be absolutely lucid on this, so that there can be no possible confusion:
• SU(2) correctly asserts that quarks form quark-antiquark doublets due to the short-range weak force mediated by massive weak gauge bosons
• U(1) falsely asserts that leptons do not form doublets due to the long-range electromagnetic force mediated by mass-less electromagnetic gauge bosons.
The correct picture to replace SU(2)xU(1) is based on the same principle for SU(2) but a replacement of U(1) by another effect of SU(2):
• SU(2) also correctly asserts that leptons form lepton-antilepton doublets (although since the binding force is long-range electromagnetism instead of short-range massive weak gauge bosons, the lepton-antilepton doublets are not confined in a small place because the range over which the electromagnetic force operates is simply far greater than that of the weak force).
Solid experimentally validated evidence for this (including mechanisms and predictions of gravity and electromagnetism strengths, etc., from massless SU(2) gauge boson interactions which automatically explain gravity and electromagnetism): here. Sheldon Glashow’s early expansion of the original Yang-Mills SU(2) gauge interaction symmetry to unify electromagnetism and weak interactions is quoted here. More technical discussion on the relationship of leptons to quarks implies by the model: here.
However, innovation of a checkable sort is now unwelcome in mainstream stringy physics, so maybe Zee was joking, and maybe he secretly doesn’t want any progress (unless of course it comes from mainstream string theory). This suggestion is made because Zee on the same page (p457) adds that the experimentally-based theory of electromagnetic unification (unification of electricity and magnetism) was a failure to achieve its full potential because those physicists: ‘did not possess the mind-set for symmetry. The old paradigm “experiments -> action -> symmetry” had to be replaced in fundamental physics by the new paradigm “symmetry -> action -> experiments,” the new paradigm being typified by grand unified theory and later by string theory.’ (Emphasis added.)
Problem is, string theory has proved an inedible, stinking turkey (Lunsford both more politely and more memorably calls string ‘a vile and idiotic lie’ which ‘has managed to slough itself along for 20 years, leaving a shiny trail behind it’). I’ve explained politely why string theory is offensive, insulting, abusive, dictatorial ego-massaging, money-laundering pseudoscience at my domain
Zee needs to try reading Paul Feyerabend’s book, Against Method. Science actually works by taking the route that most agrees with nature, regardless of how unorthodox an idea is, or how crazy it superficially looks to the prejudiced who don’t bother to check it objectively before arriving at a conclusion on its merits; ‘science,’ when it does occasionally take the popular route that is a total and complete moronic failure, e.g., mainstream string, temporarily becomes a religion. String theorists are like fanatical preachers, trying to dictate to the gullible what nature is like ahead of any evidence, the very error Bohr alleged Einstein was making in 1927. Actually there is a strong connection between the speculative Copenhagen Interpretation propaganda of Bohr in 1927 (Bohr in fact had no solid evidence for his pet theory of metaphysics, while Einstein had every causal law and mechanism of physics on his side; today we all know from high-energy physics that virtual particles are an experimental physics fact and they cause indeterminancy in a simple mechanical way on small distance scales), and string. Both rely on exactly the same mixture of lies, hype, coercion, ridicule of factual evidence, etc. Both are religions. Neither is a science, and no matter how much physically vacuous mathematical obfuscation they use, it’s failure to cover-up the gross incompetence in basic physics remains as perfectly transparent as the Emperor’s new clothes. Unfortunately, most people see what they are told to see, so this farce of string theory continues.
Feynman diagrams in loop quantum gravity, path integrals, and the relationship of leptons to quarks
Fig. 1 - Quantum gravity versus smooth spacetime curvature of general relativity
Fig. 1: Comparison of a Feynman-style diagram for general relativity (smooth curvature of spacetime, i.e., smooth acceleration of an electron by gravitational acceleration) with a Feynman diagram for a graviton causing acceleration by hitting an electron (see previous post for the mechanism and quantitative checked prediction of the strength of gravity). If you believe string theory, which uses spin-2 gravitons for ‘attraction’ (rather than pushing), you have to imagine the graviton not pushing rightwards to cause the electron to deflect, but somehow pulling from the right hand side: see this previous post for the maths of how the bogus (vacuous, non-predictive) spin-2 graviton idea works in the path integrals formulation of quantum gravity. (Basically, spin-1 gravitons push, while spin-2 gravitons suck. So if you want a checkable, predictive, real theory of quantum gravity that pushes forward, check out spin-1 gravitons. But if you merely want any old theory of quantum gravity that well and truly sucksyou can take your pick from the ‘landscape’ of 10500 stringy theories of mainstream sucking spin-2 gravitons.) In general relativity, an electron accelerates due to a continuous smooth curvature of spacetime, due to a spacetime ‘continuum’ (spacetime fabric).
In mainstream quantum gravity ideas (at least in the Feynman diagram for quantum gravity), an electron accelerates in a gravitational field because of quantized interactions with some sort of graviton radiation (the gravitons are presumed to interact with the mass-giving Higgs field bosons surrounding the electron core). As explained in the discussion of the stress-energy curvature in the previous post, in addition to the gravity mediators (gravitons) presumably being quantized rather than a continuous or continuum curved spacetime, there is the problem that the sources of fields such as discrete units of matter, come in quantized units in locations of spacetime. General relativity only produces smooth curvature (the acceleration curve in the left hand diagram of Fig. 1) by smoothing out the true discontinuous (atomic and particulate) nature of matter by the use of an averaged density to represent the ‘source’ of the gravitational field.
The curvature of the line in the Feynman diagram for general relativity is therefore due to the smoothness of the source of gravity spacetime, resulting from the way that the presumed source of curvature – the stress-energy tensor in general relativity – averages the discrete, quantized nature of mass-energy per unit volume of space. Quantum field theory is suggestive that the correct Feynman diagram for any interaction is not a continuous, smooth curve, but instead a number of steps due to discrete interactions of the field quanta with the charge (i.e., gravitational mass). However, the nature of the ‘gravitons’ has not been observed, so there are some uncertainties remaining about their nature. Fig. 1 (which was inspired – in part – by Fig. 3 in Lee Smolin’s Trouble with Physics) is designed to give a clear idea of what quantum gravity is about and how it is related to general relativity:
The previous post predicts gravity and cosmology correctly; the basic mechanism was published (by Electronics World) in October 1996, two years ahead of the discovery that there’s no gravitational retardation. More important, it predicts gravity quantitatively, and doesn’t use any ad hoc hypotheses, just experimentally validated facts as input. I’ve used that post to replace the earlier version of the gravity mechanism discussion here, here, etc., to improve clarity.
I can’t update the more permanent paper on the CERN document server here because as Tony Smith has pointed out, “… CERN’s Scientific Information Policy Board decided, at its meeting on the 8th October 2004, to close the EXT-series. …” The only way you can update a paper on the CERN document server is if it a mirror copy of one on arXiv; update the arXiv paper and CERN’s mirror copy will be updated. This is contrary to scientific ethics whereby the whole point of electronic archives is that corrections and updates should be permissible. Professor Jacques Distler, who works on string theory and is a member of arXiv’s advisory board, despite being warmly praised by me, still hasn’t even put Lunsford’s published paper on arXiv, which was censored by arXiv despite having been peer-reviewed and published.
Path integrals of quantum field theory
The path integral for the incorrect spin-2 idea was discussed at the earlier post here, while as stated the correct mechanism with accurate predictions confirming it, is at the post here. Let’s now examine the path integral formulation of quantum field theory in more depth. Before we go into the maths below, by way of background, Wiki has a useful history of path integrals, mentioning:
‘The path integral formulation was developed in 1948 by Richard Feynman. … This formulation has proved crucial to the subsequent development of theoretical physics, since it provided the basis for the grand synthesis of the 1970s called the renormalization group which unified quantum field theory with statistical mechanics. If we realize that the Schrödinger equation is essentially a diffusion equation with an imaginary diffusion constant, then the path integral is a method for the enumeration of random walks. For this reason path integrals had also been used in the study of Brownian motion and diffusion before they were introduced in quantum mechanics.’
As Fig. 1 shows, according to Feynman, ‘curvature’ is not real and general relativity is just an approximation: in reality, graviton exchange causes accelerations in little jumps. If you want to get general relativity out of quantum field theory, you have to sum over the histories or interaction graphs for lots of little discrete quantized interactions. The summation process is what we are about to describe mathematically. By way of introduction, we can remember the random walk statistics mentioned in the previous post. If a drunk takes n steps of approximately equal length x in random directions, he or she will travel an average of distance xn1/2 from the starting point, in a random direction! The reason why the average distance gone is proportional to the square-root of the number of steps is easily understood intuitively because it is due to diffusion theory. (If this was not the case, there would be no diffusion, because molecules hitting each other at random would just oscillate around a central point without any net movement.) This result is just a statistical average for a great many drunkard’s walks. You can derive it statistically, or you can simulate it on a computer, add up the mean distance gone after n steps for lots of random walks, and take the average. In other words, you take the path integral over all the different possibilities, and this allows you to work out what is most likely to occur.
Feynman applied this procedure to the principle of least action. One simple way to illustrate this is the discussion of how light reflects off a mirror. Classically, the angle of incidence is equal to the angle of reflection, which is the same as saying that light takes the quickest possible route when reflecting. If the angle of incidence were not equal to the angle of reflection, then light would obviously take longer to arrive after being deflected than it actually does (i.e., the sum of lengths of the two congruent sides in an isosceles triangle is smaller than the sum of lengths of two dissimilar sides for a trangle with the same altitude line perpendicular to the reflecting surface).
The fact that light classically seems always to go where the time taken is least is a specific instance of the more general principle of least action. Feynman explains this with path integrals in his book QED (Penguin, 1990). Physically, path integrals are the mathematical summation of all possibilities. Feynman crucially discovered that all possibilities have the same magnitude but that the phase or effective direction (argument of the the complex number) varies for different paths. Because each path is a vector, the differences in directions mean that the different histories will partly cancel each other out.
To get the probability of event y occurring, you first calculate the amplitude for that event. Then you calculate the path integral for all possible events including event y. Then you divide the first probability (that for just event y) into the path integral for all possibilities. The result of this division is the absolute probability of event y occurring in the probability space of all possible events! Easy.
Feynman found that amplitude for any given history is proportional to eiS/h-bar, and that the probability is proportional to the square of the modulus (positive value) of eiS/h-bar. Here, S is the action for the history under consideration.
What is pretty important to note is that, contrary to some popular hype by people who should know better (Dr John Gribbin being such an example of someone who won’t correct errors in his books when I email the errors), the particle doesn’t actually travel on all of the paths integrated over in a specific interaction! What happens is just one interaction, and one path. The other paths in the path integral are considered so that you can work out the probability of a given path occurring, out of all possibilities. (You can obviously do other things with path integrals as well, but this is one of the simplest things. For example, instead of calculating the probability of a given event history, you can use path integrals to identify the most probable event history, out of the infinite number of possible event histories. This is just a matter of applying simple calculus!)
However, the nature of Feynman’s path integral does allow a little interaction between nearby paths! This doesn’t happen with brownian diffusion! It is caused by the phase interference of nearby paths, as Feynman explains very carefully:
‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.
The Wiki article explains:
‘In the limit of action that is large compared to Planck’s constant h-bar, the path integral is dominated by solutions which are stationary points of the action, since there the amplitudes of similar histories will tend to constructively interfere with one another. Conversely, for paths that are far from being stationary points of the action, the complex phase of the amplitude calculated according to postulate 3 will vary rapidly for similar paths, and amplitudes will tend to cancel. Therefore the important parts of the integral—the significant possibilities—in the limit of large action simply consist of solutions of the Euler-Lagrange equation, and classical mechanics is correctly recovered.
‘Action principles can seem puzzling to the student of physics because of their seemingly teleological quality: instead of predicting the future from initial conditions, one starts with a combination of initial conditions and final conditions and then finds the path in between, as if the system somehow knows where it’s going to go. The path integral is one way of understanding why this works. The system doesn’t have to know in advance where it’s going; the path integral simply calculates the probability amplitude for a given process, and the stationary points of the action mark neighborhoods of the space of histories for which quantum-mechanical interference will yield large probabilities.’
I think this last bit is badly written: interference is only possible in the ‘small core’ paths that the size of the photon or other particle takes. The paths which are not taken are not eliminated by inferference: they only occur in the path integral so that you know the absolute probability of a given path actually occurring.
Similarly, to calculate the probability of dice landing heads up, you need to know how many sides dice have. So on one throw the probability of one particular side landing facing upwards is 1/6 if there are 6 sides per die. But the fact that the number 6 goes into the calculation doesn’t mean that the dice actually arrive with every side facing up. Similarly, a photon doesn’t arrive along routes where there is perfect cancellation! No energy goes along such routes, so nothing at all physical travels along any of them. Those routes are only included in the calculation because they were possibilities, not because they were paths taken.
In some cases, such as the probability that a photon will be reflected from the front of a block of glass, other factors are involved. For the block of glass, as Feynman explains, Newton discovered that the probability of reflection depends on the thickness of the block of glass as measured in terms of the wavelength of the light being reflected. The mechanism here is very simple. Consider the glass before any photon even approaches it. A normal block of glass is full of electrons in motion and vibrating atoms. The thickness of the glass determines the number of wavelengths that can fit into the glass for any given wavelength of vibration. Some of the vibration frequencies will be cancelled out by interference. So the vibration frequencies of the electrons at the surface of the glass are modified in accordance to the thickness of the glass, even before the photon approaches the glass. This is why the exact thickness of the glass determines the precise probability of light of a given frequency being reflected. It is not determined when the photon hits the electron, because the vibration frequencies of the electron have already been determined by the interference of certain frequencies of vibration in the glass.
The natural frequencies of vibration in a block of glass depend on the size of the block of glass! These natural frequencies then determine the probability that a photon is reflected. So there is the two-step mechanism behind the dependency of photon reflection probability upon glass thickness. It’s extremely simple. Natural frequency effects are very easy to grasp: take a trip on an old school bus, and the windows rattle with substantial amplitude when the engine revolutions reach a particular frequency. Higher or lower engine frequencies produce less window rattle. The frequency where the windows shake the most is the natural frequency. (Obviously for glass reflecting photons, the oscillations we are dealing with are electron oscillations which are much smaller in amplitude and much higher in frequency, and in this case the natural frequencies are determined by the thickness of the glass.)
The exact way that the precise thickness of a sheet of glass affects the abilities of electrons on the surface to reflect light easily understood by reference to Schroedinger’s original idea of how stationary orbits arise with a wave picture of an electron. Schroedinger found that where an integer number of wavelengths of the electron fits into the orbit circumference, there is no interference. But when only a fractional number of wavelengths would fit into that distance, then interference would be caused. As a result, only quantized orbits were possible in that model, corresponding to Bohr’s quantum mechanics. In a sheet of glass, when an integer number of wavelengths of light for a particular frequency of oscillation fit into the thickness of the glass, there is no interference in vibrations at that specific frequency, so it is a natural frequency. However, when only a fractional number of wavelengths fit into the glass thickness, there is destructive interference in the oscillations. This influences whether the electrons are resonating in the right way to admit or reflect a photon of a given frequency. (There is also a random element involved, when considering the probability for individual photons chancing to interact with individual electrons on the surface of the glass in a particular way.)
Virtual pair-production can be included in path integrals by treating antimatter (such positrons) as matter (such as electrons) travelling backwards in time (this was one of the conveniences of Feynman diagrams which initially caused Feynman a lot of trouble, but it’s just a mathematical convenience for making calculations). For more mathematical detail on path integrals, see Richard Feynman and Albert Hibbs, Quantum Mechanics and Path Integrals, as well as excellent briefer introductions such as Christian Grosche, An Introduction into the Feynman Path Integral, and Richard MacKenzie, Path Integral Methods and ApplicationsFor other standard references, scroll down this page. For Feynman’s problems and hostility from Teller, Bohr, Dirac and Oppenheimer in 1948 to path integrals, see quotations in the comments of the previous post.
Feynman was extremely pragmatic. To him, what matters is the validity of the physical equations and their predictions, not the specific model used to get the equations and predictions. For example, Feynman said:
If you can get the right equations even from a false model, you have done something useful, as Maxwell did. However, you might still want to search for the correct model, as Feynman explained:
Feynman is here referring to the physics of the infinite series of Feynman diagrams with corresponding terms in the perturbative expansion for interactions with virtual particles in the vacuum in quantum field theory:
‘Given any quantum field theory, one can construct its perturbative expansion and (if the theory can be renormalised), for anything we want to calculate, this expansion will give us an infinite sequence of terms. Each of these terms has a graphical representation called a Feynman diagram, and these diagrams get more and more complicated as one goes to higher and higher order terms in the perturbative expansion. There will be some … ‘coupling constant’ … related to the strength of the interactions, and each time we go to the next higher order in the expansion, the terms pick up an extra factor of the coupling constant. For the expansion to be at all useful, the terms must get smaller and smaller fast enough … Whether or not this happens will depend on the value of the coupling constant.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 182.
This perturbative expansion is a simple example of the application of path integrals. There are several ways that the electron can move, each corresponding to a unique Feynman diagram. The electron can do along a direct path from spacetime location A to spacetime location B. Alternatively, it can be deflected by a virtual particle enroute, and travel by a slightly longer path. Another alternative is that if could be deflected by two virtual particles. There are, of course, an infinite number of other possibilities. Each has a unique Feynman diagram and to calculate the most probable outcome you need to average them all in accordance with Feynman’s rules.
For the case of calculating the magnetic moment of leptons, the original calculation came from Dirac and assumed in effect the simplest Feynman diagram situation: that the electron interacts with a virtual (gauge boson) ‘photon’ from a magnet in the simplest simple way possible. This is what conributes 98.85% of the total (average) magnetic moment of leptons, according to path integrals for lepton magnetic moments. The next Feynman diagram is the second highest contributor and accounts for over 1% of interactions. This correction is the situation evaluated by Schwinger in 1947 and is represented by a Feynman diagram in which a lepton emits a virtual photon before it interacts with the magnet. After interacting with the magnet, it re-absorbs the virtual photon it emitted earlier. This is odd because if an electron emits a virtual photon, it briefly (until the virtual photon is recaptured) loses energy. How, physically, can this Feynman diagram explain how the magnetic moment of the electron be increased by 0.116% as a result of losing the energy of a virtual photon for the duration of the interaction with a magnet? If this mechanism was the correct story, maybe you’d have a reduced magnetic moment result, not an increase? Since virtual photons mediate electromagnetic charge, you might expect them to reduce the charge/magnetism of the electromagnetism by being lost during an interaction. Obviously, the loss of a non-virtual photon from an electron has no effect on the charge energy at all, it merely decelerates the electron (so kinetic energy and mass are slightly reduced, not electromagnetic charge).
There are two possible explanations to this:
1) the Feynman diagram for Schwinger’s correction is physically correct. The emission of the virtual photon occurs in such a way that the electron gets briefly deflected towards the magnet for the duration of the interaction between electron and magnet. The reason why the magnetic moment of the electron is increased as a result of this is simply that the virtual ‘photon’ that is exchanged between the magnet and the electron is blue-shifted by the motion of the electron towards the magnet for the duration of the interaction. After the interaction, the electron re-captures the virtual ‘photon’ and is no-longer moving towards the magnet. The blue-shift is the opposite of red-shift. Whereas red-shift reduces the interaction strength between receding charges, blue-shift (due to the approach of charges) increases the interaction strength because the photons have an energy that is directly proportional to their frequency (E = hf). This mechanism may be correct, and needs further investigation.
2) The other possibility is that there is a pairing between the electron core and a virtual fermion in the vacuum around it which increases the magnetic moment by a factor which depends on the shielding factor of the field from the particle core. This mechanism was described in the previous post. It helped inspire the general concept for the mass model discussed in the previous post, which is independent of this magnetic moment mechanism, and makes checkable predictions of all observable lepton and hadron masses.
The relationship of leptons to quarks and the perturbative expansion
As mentioned in the previous post (and comments number 13, 14, 22, 24, 25, 26, 27, 28 and 31 of that post), the number one priority now is to develop the details of the lepton-quark relationship. The evidence that quarks are pairs or triads of confined leptons with some symmetry transformations was explained in detail in comment 13 to the previous post and is known as universality. This was first recognised when the lepton beta decay event
muon -> electron + electron antineutrino + muon neutrino
was found to have similar detailed properties to the quark beta decay event
neutron -> proton + electron + electron antineutrino
Nicola Cabibbo used such evidence that quarks are closely related to leptons (I’ve only given one of many examples above) to develop the concept of ‘weak universality, which involves a similarity in the weak interaction coupling strength between different generations of particles.’
As stated in comment 13 of the previous post, I’m interested in the relationship between electric charge Q, weak isospin charge T and weak hypercharge Y:
Q = T + Y/2.
Where Y = −1 for left-handed leptons (+1 for antileptons) and Y = +1/3 for left-handed quarks (−1/3 for antiquarks).
The minor symmetry transformations which occur when you confine leptons in pairs or triads to form “quarks” with strong (colour) charge and fractional apparent electric charge, are physically caused by the increased strength of the polarized vacuum, and by the ability of the pairs of short-ranged virtual particles in the field to move between the nearby individual leptons, mediating new short-ranged forces which would not occur if the leptons were isolated. The emergence of these new short ranged forces, which appear only when particles are in close proximity, is the cause of the new nuclear charges, and these charges add extra quantum numbers, explaining why the Pauli exclusion principle isn’t violated. (The Pauli exclusion simply says that in a confined system, each particle has a unique set of quantum numbers.) Peter Woit’s Not Even Wrong summarises what is known in Figure 7.1 on page 93 of Not Even Wrong:
‘The picture shows the SU(3) x SU(2) x U(1) transformation properties of the first three generations of fermions in the standard model (the other two generations behave the same way).
‘Under SU(3), the quarks are triplets and the leptons are invariant.
‘Under SU(2), the [left-handed] particles in the middle row are doublets (and are left-handed Weyl-spinors under Lorentz transformations), the other [right-handed] particles are invariant (and are right-handed Weyl-spinors under Lorentz transformations).
‘Under U(1), the transformation properties of each particle is given by its weak hypercharge Y.’
This makes it easier to understand: the QCD colour force of SU(3) controls triplets of particles (’quarks’), whereas SU(2) controls doublet’s of particles (’quarks’).
But the key thing is that the hypercharge Y is different for differently handed quarks of the same type: a right-handed downquark (electric charge -1/3) has a weak hypercharge of -2/3, while a left-handed downquark (same electric charge as the right-handed one, -1/3), has a different weak hypercharge: 1/3 instead of -2/3!
The issue of the fine detail in the relationship of leptons and quarks, how the transformation occurs physically and all the details you can predict from the new model suggested in the previous post, is very interesting and, as stated, is the number one priority.
For a start, to study the transformation of a lepton into a quark, we will consider the conversion of electrons into downquarks. First, the conversion of a left-handed electron into a left-handed downquark will be considered, because the weak isospin charge is the same for each (T = -1/2):
eL -> dL
The left-handed electron, eL, has a weak hypercharge of Y = -1 and the left-handed downquark, dL, has a weak hypercharge of Y = +1/3. Therefore, this transformation incurs a fall in observable electric charge by a factor of 3 and an accompanying increase in weak hypercharge by +4/3 units (from -1 to +1/3).
Now, if the vacuum shielding mechanism suggested has any heuristic validity, the right-handed electron should transform into a right-handed downquark by way of a similar fall in electric charge by a factor of 3 and accompanying increase in weak hypercharge by +4/3 units:
eR -> dR
The weak isospin charges are the same for right-handed electrons and right-handed downquarks (T = 0 in each case).
The transformation of a right-handed electron to right-handed downquark involves the same reduction in electric charge by a factor of 3 as for left-handed electrons, while the weak hypercharge changes from Y = -2 to Y = -2/3. This means that the weak hypercharge increases by +4/3 units, just the same amount as occurred with the transformation of a left-handed electron to a left-handed downquark. So there is a consistency to this model: the shielding of a given amount of electric charge by the polarized vacuum causes a consistent increase in the weak hypercharge.
If we ignore for the moment the possibility that antimatter leptons may get transformed into upquarks and just consider matter, then the symmetry transformations required to change right-handed neutrinos into right-handed upquarks, and left-handed neutrions into left-handed upquarks are:
vL -> uL
vR -> uR
The first transformation involves a left-handed neutrino, vL, with Y = -1, Q = 0, and T = 1/2, becoming a left-handed upquark, uL, with Y = 1/3, Q = 2/3, and T = 1/2. We notice that Y gains 4/3 in the transformation, while Q gains 2/3.
The second transformation involves a right-handed neutrino with Y = 0, Q = 0 and T = o becoming a right-handed upquark with Y = 4/3, Q = 2/3 and T = 0. We can immediately see that the transformation has again resulted in Y gaining 4/3 while Q gains 2/3. Hence, the concept that a given change in electric charge is accompanied by a given change in hypercharge remains valid. So we have accounted for the conversion of the four leptons in one generation of particle physics (two types of handed electrons and two types of handed neutrinos) into the four quarks in the same generation of particle physics (left and right handed versions of two quark flavors).
These transformations are obviously not normal reactions at low energy. The first two make checkable, falsifiable predictions about unification to replace supersymmetry speculation about the unification of running couplings, the relative charges of the electromagnetic, weak and strong forces as a function of either collision energy (e.g., electromagnetic charge increases at higher energy, while strong charge falls) or distance (e.g., electromagnetic charge increases at small distances, while strong charge falls).
If we review the symmetry transformations suggested for a generation of leptons into a generation of quarks,
eL -> dL
eR -> dR
vL -> uL
vR -> uR
it is clear that the last two reactions are in difficulty, because the conversion of neutrinos into upquarks (in this example of a generation of quarks) is a potential problem for the suggested physical mechanism in the previous (and earlier) posts. The physical mechanism for the first two of the four transformations is relatively straightforward to picture: try to collide leptons at enormous energy and the overlap of the polarized vacuum veils of polarizable fermions should shield some of the long-range (observable low energy) electric charge, with this shielded energy is used instead in short range weak hypercharge mediated by weak gauge bosons, and colour charges for the strong force.
Because we know exactly how much energy is ‘lost’ from the electric charge in the first two transformations due to the increased shared polarized vacuum shield, we can quantitatively check this physical mechanism by setting this lost energy equal to the energy gained in the weak force and seeing if the predictions are accurate. This mechanism might not apply directly to the last two transformations, since neutrinos do not carry a net electric charge. It is also necessary to investigate the possibilities for the transformation of positrons into upquarks. This issue of why there is little antimatter might be resolved if positrons were converted into upquarks at high energy in the big bang by the mechanism suggested for the first two transformations.
However, the polarized vacuum shielding mechanism might still apply in some circumstances to neutral particles, depending on the geometry. Neutrinos may be electrically neutral as observed at low energy or large distances, while actually carrying equal and opposite electric charge. (Similarly, atoms often appear to be neutral, but if we smash them to pieces we get observable electric charges arise. The apparent electrical neutrality of atoms is a masking effect of the fact that atoms usually carry equal positive and negative charge, which cancel as seen from a distance. A photon of light similarly carries positive electric field and negative electric field energy in equal quantities; the two cancel out overall, but the electromagnetic fields of the photon can interact with charges.
Charge is only manifested by way of the field created by a charge; since nobody has ever seen the core of a charged particle, only the field. A confined field of a given charge is therefore indistinguishable from a charge. The only reason why an electron appears to be a negative charge is because it has a negative electric field around it. As shown in Fig. 5 of the previous post, there is a modification necessary to the U(1) symmetry of the standard model of particle physics: negative gauge bosons to mediate the fields around negative charges, and positive gauge bosons to mediate the fields around positive charges.
So a ‘neutral’ particle which is neutral because it contains of equal amounts of positive and negative electric field, may be able to induce electric polarization of the vacuum for the short ranged (uncancelled) electric field. The range of this effect is obviously limited to the distance between the centrel of the positive part of the particle and the negative part of the particle. (In the case of a photon for example, this distance is the wavelength.)
If we replace the existing electroweak SU(2)xU(1) symmetry by SU(2)xSU(2), maybe with each SU(2) having a different handedness, then we get four charged bosons (two charged massive bosons for the weak force, and two charged massless bosons for electromagnetism) and two neutral bosons: a massless gravity mediating gauge boson, and a massive weak neutral-current producing gauge boson.
Let’s try the transformation of a positron into an upquark. This has two major advantages over the idea that neutrinos are transformed into upquarks. First, it explains why we don’t observe much antimatter in nature (tiny amounts arise from radioactive decays involving positron emission, but it quickly annihilates with matter into gamma rays). In the big bang, if nature was initially symmetric, you would expect as much matter as antimatter. The transformation of free positrons into confined upquarks would sort out this problem. Most of the universe is hydrogen, consisting of a proton containing two upquarks and a downquark, plus an orbital electron. If the upquarks come come from a transformation of positrons while downquarks come from a transformation of electrons, the matter-antimatter balance is resolved.
Secondly, the transformation of positrons to upquarks has a simple mechanism by vacuum polarization shielding of the electric charge, causing the electric charge of the positron to drop from +1 unit for a positron to +2/3 units for upquarks. This occurs because you get two positive upquarks and one downquark in a proton. The transformation is
e+L -> uL
The positron on the left hand side has Y = +1, Q = +1 and T = +1/2. The upquark on the right hand side has Y = +1/3, Q = +2/3 and T = +1/2. Hence, there is an decrease of Y by 2/3, while Q decreases by 1/3. Hence the amount of change of Y is twice that of Q. This is impressively identical to the the situation in the transformation of electrons into downquarks, where an increase of Q by 2/3 units is accompanied by an increase of Y by twice 2/3, i.e., by 4/3, for the transformation eL -> dL
There are only two ways that quarks can group: in pairs and in triplets or triads. The pairs are of quarks sharing the same polarized vacuum are known as mesons, and mesons are the SU(2) symmetry pairs of left-handed quark and left-handed anti-quark, which both experience the weak nuclear force (no right-handed particle can participate in the weak nuclear force, because the right handed neutrino has zero weak hypercharge). The SU(3) symmetry triplets of quarks are called baryons.
Because only left-handed particles experience the weak force (i.e., parity is broken), it is vital to explain why this is so. This arises from the way the vector bosons gain mass. In the basic standard model, everything is massless. Mass is added to the standard model by a separate scalar field (such as that which is speculatively proposed by Philip Anderson and Peter Higgs and called the Higgs field), which gives all the massive particles (including the weak force vector bosons) their mass. The quanta for the scalar mass field are named ‘Higgs bosons’ but these have never been officially observed, and mainstream speculations do not make predict Higgs boson mass unambiguously.
The model for masses in the previous post predicts composite (meson and baryon) particle masses to be due to an integer number of 91 GeV building blocks of mass which couple weakly due to the shielding factor due to the polarized vacuum around a fermion. The 91 GeV energy equivalent to the rest mass of the uncharged neutral weak gauge boson, the Z.
The SU(3), SU(2) and U(1) gauge symmetries of the standard model describe triplets (baryons), doublets (mesons) and single particle cores (leptons), dominated by strong, weak and electromagnetic interactions, respectively. The problem is located in the electroweak SU(2)xU(1) symmetry. Most of the papers and books on gauge symmetry focus on the technical details of the mathematical machinery, and simple mechanisms are looked at askance (as is generally the case in quantum mechanics and general relativity). So you end up learning say, how to drive a car without knowing how the engine works, or you learn how the engine works without any knowledge of the territory which would enable you to plan a useful journey. This is the way some complex mathematical physics is traditionally taught, mainly to get away from useless speculations: Feynman’s analogy of the chess game is fairly good. (Deduce some of the rules of the game by watching the game being played, and use these rules to make some accurate predictions about what may happen; without having the complete understanding necessary for confident explanation of what the game is about. Then make do by teaching the better known predictive rules, which are technical and accurate, but don’t always convey a complete understanding of the big picture.)
A serious problem with the U(1) symmetry is that you can’t really ever get single leptons in nature. They all arise naturally from pair production, so they usually arrive in doublets, contradicting U(1); examples: in beta decay, you get a beta particle and an antineutrino, while in pair production you may get a positron and an electron.
This is part of the reason why SU(2) deals with leptons in the model proposed in the previous post. Whereas pairs of left-handed quarks are confined in close proximity in mesons, a lepton-antilepton pair is not confined in a small space, but it is still a type of doublet and can be treated as such by SU(2) using massless gauge bosons (take the masses away from the Z, W+ and W- weak bosons, and you are left with a massless Z boson that mediates gravity, and massless W+ and W- bosons which mediate electromagnetic forces). Because a version of SU(2) with massless gauge bosons has infinite range inverse-square law fields, it is ideal for describing the widely separated lepton-antilepton pairs created by pair production, just as SU(2) with massive guage bosons is ideal for describing the short range weak force in left-handed quark-antiquark pairs (mesons).
The electroweak chiral symmetry arises because only left-handed particles can interact with massive SU(2) gauge bosons (the weak force), while all particles can interact with massless SU(2) gauge bosons (gravity and electromagnetism). The reason why this is the case is down to the nature of the way mass is given to SU(2) gauge bosons by a mass-giving Higgs-type field. Presumably the combined Higgs boson when coupled with a massless weak gauge boson gives a composite particle which only interacts with left-handed particles, while the nature of the massless weak gauge bosons is that in the absence of Higgs bosons they can interact equally with left and right handed particles.
To summarise, quarks are probably electron and antielectrons (positrons) with the symmetry transformation modifications you get from close confinement of electrons against the exclusion principle (e.g., such electrons acquire new charges and short range interactions).
Downquarks are electrons trapped in mesons (pairs of quarks containing quark-antiquark, bound together by the SU(2) weak nuclear force, so they have short lifetimes and under beta radioactive decay) or baryons, which are triplets of quarks bound by the SU(3) strong nuclear force. The confinement of electrons in a small reduces their electric charge because they are all close enough in the pair or triplet to share the same overlapping polarized vacuum which shields part of the electric field. Because this shielding effect is boosted, the electron charge per electron observed at long range is reduced to a fraction. The idealistic model is 3 electrons confined in close proximity, giving a 3 times strong polarized vacuum, which reduces the observable charge per electron by a factor of 3, giving the e/3 downquark charge. This is a bit too simplistic of course because in reality you get mainly stable combinations like protons (2 upquark and 1 downquark). The energy lost from the electric charge, due to the absorption in the polarized vacuum, powers short-ranged nuclear forces which bind the quarks in mesons and hadrons together.
Upquarks would seem to be trapped positrons. This is neat because most of the universe is hydrogen, with one electron in orbit and 2 upquarks plus 1 downquark in the proton nucleus. So one complete hydrogen atom is formed by 2 electrons and 2 positrons. This explains the absence of antimatter in the universe: the positrons are all here, but trapped in nuclei as upquarks. Only particles with left-handed Weyl spin undergo weak force interactions.
Possibly the correct electroweak-gravity symmetry group is SU(2)L x SU(2)R, where SU(2)L is a left-handed symmetry and SU(2)R is a right handed one. The left-handed version couples to massive bosons which give mass to particles and vector bosons, creating all the massive particles and weak vector bosons. The right handed version presumably does not couple to massive bosons. The result here is that the right handed version, SU(2)R, produces only mass-less particles, giving the gauge bosons needed for long-range electromagnetic and gravitational forces. If that works in detail, it is a simplification of the SU(2)xU(1) electroweak model, which should make the role of the mass-giving field clearer, and predictions easier.
The mainstream SU(2)xU(1) model requires a symmetry-breaking Higgs field which works by giving mass to weak gauge bosons only below a particular energy or beyond a particular distance from a particle core. The weak gauge bosons are supposed to be mass-less above that energy, where electroweak symmetry exists; electroweak symmetry breaking is supposed to occur below the Higgs expectation energy due to the fact that 3 weak gauge bosons acquire mass at low energy, while photons don’t acquire mass at low energy.
This SU(2)xU(1) model mimics a lot of correct physics, without being the correct electroweak unification. How far has the idea that weak gauge bosons lose mass above the Higgs expectation value been checked (I don’t think it has been checked at all yet)? Presumably this is linked to ongoing efforts to see evidence for a Higgs boson. The electroweak theory correctly unifies the weak force (dealing with neutrinos, beta decay and the behaviour of mesons) with Maxwell’s equations at low energy and the electroweak unification SU(2)xU(1) predicted the W and Z massive weak gauge bosons detected at CERN in 1983. However, the existence of three massive weak gauge bosons is the same in the proposed replacement for SU(2)xU(1). I think that the suggested replacement of U(1) by another SU(2) makes quite a lot of changes to the untested parts of the standard model (in particular the Higgs mechanism), besides the obvious benefits of introducing gravity and causal electromagnetism.
Spherical symmetry of Hubble recession
I’d like to thank Bee and others at the Backreaction blog for patiently explaining to me that a statement that radial distance elements are equal for the Hubble recession in all directions around us,
H = dv/dr = dv/dx = dv/dy = dv/dz
t(age of universe), 1/H = dr/dv = dx/dv = dy /dv = dz/dv
dv/H = dr = dx = dy = dx
for spherically symmetrical recession of stars around us (in directions x, y, z, where r is the general radial direction that can point any way), appears superficially to be totally ‘wrong’ to people who are only unaccustomed to cosmology where the elementary equations for spherical geometry and metrics in non-symmetric spatial dimensions don’t apply. Hopefully, ‘critics’ will grasp the point that equation A does not disprove equation B just because you have seen equation A in some textbook, and not equation B.
For example, some people repeatedly and falsely claim that H = dv/dr = dv/dx = dv/dy = dv/dz and the resulting equality dr = dx = dy = dx is total rubbish, and is ‘disproved’ by the existence of metrics and non-symmetrical spherical geometrical equations. They ignore all explanations that this equality of gradient elements has nothing to do with metrics or spherical geometry, and is due to the spherical symmetry of the cosmic expansion we observer around us.
Another way to look at H = dv/dr = dv/dx = dv/dy = dv/dz is to remember that 1/H is a way to measure the age of the universe. If the universe were at critical density and being gravitationally slowed down with no cosmological constant to offset this gravity effect by providing repulsive long range force and an outward acceleration to cancel out the gravitational inward deceleration assumed by the mainstream (i.e., the belief until 1998), then the age of the universe would be (2/3)/H where 2/3 is the compensation factor for gravitational retardation.
However, since 1998 there has been good evidence that gravity is not slowing down the expansion; instead there is either something opposing gravity by causing repulsion at immense distance scales and outward acceleration (so-called ‘dark energy’ giving a small positive cosmological constant), or else there is a partial lack of gravity at long distances due to graviton redshift and/or the geometry of a quantum gravity mechanism (depending on whether you are assuming spin-2 gravitons or not), which has substantially more predictive and less ad hoc. since it was predicted via Electronics World Oct. 1996, years before being confirmed by observation (see comment 11 on previous post).
Therefore, let’s use 1/H as the age of the universe, time! Then we find:
This proves that dr/dv = dx/dv = dy/dv = dz/dv.
Now multiply this out by dv, and what do you get? You get:
dr = dx = dy = dz.
As Fig. 2 shows, it is a fact that the Hubble parameter can be expressed as H = dv/dr = dv/dx = dv/dy = dv/dz, where the equality of numerators means that the denominators are similarly equal: dr = dx = dy = dx. This is fact, not an opinion or guess.
Fig 2 - why dr = dx = dy = dx in the Hubble law v/r = H or dv/dr = H
Fig. 2: Illustration of the reason why the Hubble law H = dv/dr = dv/dx = dv/dy = dv/dz, where because of the isotropy (i.e. the Hubble law is the same in every direction we look, as far as observational evidence can tell), the numerators in the fractions are all equal to dv so the denominators are all equal to each other too: dr = dx = dy = dx. Beware everyone, this has nothing whatsoever to do with metrics, with general relativity, or with the general case in spherical geometry (where the origin of coordinates need not in general be the centre of the spherical symmetry)!
So if your textbook has a formula which ‘contradicts’ dr = dx = dy = dx or if you think that dr = dx = dy = dx should in your opinion be replaced by a metric with the squares of line elements all added up, or with a general formula for spherical geometry which applies to situations where the recession would be vary with directions, then you are wrong. As one commentator on this blog has said (I don’t agree with most of it), it is true that new ideas which have not been investigated before often look ‘silly’. People who do not check the physics and instead just pick out formulae, misunderstand them, and then ridicule them, are not “critics”. They are not criticising the work, instead they are criticising their own misunderstandings. So any ridicule and character assassinations resulting should be taken with a large pinch of salt. It’s best to try to see the funny side when this occurs!
One of the very interesting things about dr = dx = dy = dx is what you get for time dimensions because the age of the universe (if there is no gravitational deceleration, as was shown to be the case in 1998) is 1/H, and because we look back in time with increasing distance according to r = x = y = z = ct, it follows that there are equivalent time-like dimensions for each of the spatial dimensions. This makes spacetime easier to understand and allows a new unification scheme! The expanding universe has three orthagonal expanding time-like dimensions (we usually refer to astronomical dimensions in time units like ‘lightyears’ anyway, since we are observing the past with increasing distance, due to the travel time of light) in addition to three spacetime dimensions describing matter. Surely this contradicts general relativity? No, because all three time dimensions are usually equal, and so can be represented by a single time element, dt, or its square. To do this, we take dr = dx = dy = dz and convert them all into time-like equivalents by dividing each distance element by c, giving:
(dr)/c = (dx)/c = (dy)/c = (dz)/c
which can be written as:
dtr = dtx = dty = dtz
So, because the age of the universe (ascertained by the Hubble parameter) is the same in all directions, all the time dimensions are equal! This is why we only need one time to describe the expansion of the universe. If the Hubble expansion rate was found to be different in directions x, y and x, then the age of the universe would appear to be different in different directions. Fortunately, the age of the universe derived from the Hubble recession seems to be the same (within observational error bars) in all directions: time appears to be isotropic! This is quite a surprising result as some hostility to this new idea from traditionalists shows.
But the three time dimensions which are usually hidden by this isotropy are vitally important! Replacing the Kaluza-Klein theory, Lunsford has a 6-dimensional unification of electrodynamics and gravitation which has 3 time-like dimensions and appears to be what we need. It was censored off arXiv after being published in a peer-reviewed physics journal, “Gravitation and Electrodynamics over SO(3,3)”, International Journal of Theoretical Physics, Volume 43, Number 1 / January, 2004, Pages 161-177, which can be downloaded here. The mass-energy (i.e., matter and radiation) has 3 spacetime which are different from the 3 cosmological spacetime dimensions: cosmological spacetime dimensions are expanding, while the 3 spacetime dimensions are bound together but are contractable in general relativity. For example, in general relativity the Earth’s radius is contracted by the amount 1.5 millimetres.
In addition, as was shown in detail in the previous post, this sorts out ‘dark energy’ and predicts the strength of gravity accurately within experimental data error bars, because when we rewrite the Hubble recession in terms of time rather than distance, we get acceleration which by Newton’s 2nd empirical law of motion (F = ma) implies an outward force of receding matter, which in turn implies by Newton’s 3rd empirical law of motion an inward reaction force which – it turns out – is the mechanism behind gravity:
‘To find out what the acceleration is, we remember that velocity is defined as v = dR/dt, and this rearranges to give dt = dR/v, which can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v.dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR.d(HR)/dR = H2R.’
– Herman Minkowski, 1908.
Deriving the relationship between the FitzGerald contraction and the gravitational contraction
Feynman finds that whereas lengths contract in the direction of motion at velocity v by the ratio (1 – v2/c2)1/2, gravity contracts lengths by the amount (1/3)MG/c2 = 1.5 mm for the contraction of Earth’s radius by gravity.
It is of interest that this result can be obtained simply, throwing light on the relationship between the equivalence of mass and energy in ‘special relativity’ (which is at best just an approximation) and the equivalence of inertial mass and gravitational mass in general relativity.
To start with, recall Dr Love’s derivation of Kepler’s law from the equivalence of the kinetic energy of a planet to its gravitational potential energy, given in a previous post.
This is very simple. If a body’s average kinetic energy in space (outwide the atmosphere) is such that it has just over the escape velocity, it will eventually escape and will therefore be unable to orbit endlessly. If it has just under that velocity, it will eventually fall back to Earth and so it will not orbit endlessly, just as is the case if the average velocity is too high. Like Goldilocks and the porridge, it is very fussy.
The average orbital velocity must exactly match the escape velocity – and be neither more nor less than the escape velocity – in order to achieve a stable orbit.
Dr Love points out the consequences: a body in orbit must have an average velocity equal to escape velocity v = (2GM/r1/2 which implies that its kinetic energy must be equal to its gravitational potential energy:
kinetic energy, E = (1/2) mv 2 = (1/2) m((2GM/r1/2 )2 = mMG/r.
This permits him to derive Kepler’s law. It is also very important because it explains the relationship for stability of orbits:
average kinetic energy = gravitational potential energy
Einstein’s equivalence of inertial and gravitational mass in E = mc2 then allows us to use this equivalence of inertial kinetic energy and gravitational potential energy derive the equivalence principle of general relativity, which states that the inertial mass is equal to the gravitational mass, at least for orbiting bodies. Another physically justified argument is that gravitational potential is the gravity energy that would be released in the case of collapse. If you allowed the object to fall and therefore pick up that gravitational potential energy, the latter energy would be converted into kinetic energy of the object. This is why the two energies are equivalent. It’s a rigorous argument!
Now test it further. Take the FitzGerald-Lorentz contraction of length due to inertial motion at velocity, where objects are compressed by the ratio (1 – v2/c2)1/2, using equivalence of average kinetic energy to gravitational potential energy, you can place the escape velocity, v = (2GM/r1/2 into the contraction formula, and expand the result to two terms using the binomial expansion. You find that the radius of a gravitational mass would be reduced by the amount GM/c2 = 4.5 mm for Earth’s radius which is three times as big as Feynman’s formula for gravitational compression of Earth’s radius. The factor of three comes from the fact that the FitzGerald-Lorentz contraction is in one dimension only (direction of motion), while the gravitational field lines radiate in three dimensions, so the same amount of contraction is spread over three times as many dimensions, giving a reduction in radius by (1/3)GM/c2 = 1.5 mm! (There is also a rigorous mathematical discussion of this on the page here if you have the time to scroll down and find it.)
Unusually, Feynman makes a confused mess of this effect in his relevant volume of Lectures on Physics, c42 p6, where correctly he gives his equation 42.3 for excess radius being equal to predicted radius minus measured radius (i.e., he claims that the predicted radius is the bigger one in the equation) but then on the same page in the text falsely and confusingly writes: ‘… actual radius exceeded the predicted radius …’ (i.e., he claims in the text that the predicted radius is the smaller).
Professor Jacques Distler’s philosophical and mathematical genius
‘A theorem is only as good as the assumptions underlying it. … particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigourous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’
– Professor Jacques Distler, Musings blog post on the Role of Rigour.
Jacques also summarises the issues for theoretical physics clearly in a comment there:
1. ‘There’s the issue of the theorem itself, and whether the assumptions that went into it are physically-justified.
2. ‘There’s the issue of a certain style of doing Physics which values proving theorems over other ways of arriving at physical knowledge.
3. ‘There’s the rhetorical use to which the (alleged) theorem is put, in arguing for or against some particular approach. In particular, there’s the unreflective notion that a theorem trumps any other sort of evidence.’ |
9a4b6b433c0b0234 | From Wikipedia, the free encyclopedia
(Redirected from Quasi-particle)
Jump to navigation Jump to search
In physics, quasiparticles and collective excitations (which are closely related) are emergent phenomena that occur when a microscopically complicated system such as a solid behaves as if it contained different weakly interacting particles in vacuum. For example, as an electron travels through a semiconductor, its motion is disturbed in a complex way by its interactions with other electrons and with atomic nuclei. The electron behaves as though it has a different effective mass travelling unperturbed in vacuum. Such an electron is called an electron quasiparticle.[1] In another example, the aggregate motion of electrons in the valence band of a semiconductor or a hole band in a metal[2] behave as though the material instead contained positively charged quasiparticles called electron holes. Other quasiparticles or collective excitations include the phonon (a particle derived from the vibrations of atoms in a solid), the plasmons (a particle derived from plasma oscillation), and many others.
These particles are typically called quasiparticles if they are related to fermions, and called collective excitations if they are related to bosons,[1] although the precise distinction is not universally agreed upon.[3] Thus, electrons and electron holes (fermions) are typically called quasiparticles, while phonons and plasmons (bosons) are typically called collective excitations.
The quasiparticle concept is important in condensed matter physics because it can simplify the many-body problem in quantum mechanics. The theory of quasiparticles was developed by the Soviet physicist Lev Landau in the 1930s.[4][5]
General introduction[edit]
Solids are made of only three kinds of particles: electrons, protons, and neutrons. Quasiparticles are none of these; instead, each of them is an emergent phenomenon that occurs inside the solid. Therefore, while it is quite possible to have a single particle (electron or proton or neutron) floating in space, a quasiparticle can only exist inside interacting many-particle systems (primarily solids).
Motion in a solid is extremely complicated: Each electron and proton is pushed and pulled (by Coulomb's law) by all the other electrons and protons in the solid (which may themselves be in motion). It is these strong interactions that make it very difficult to predict and understand the behavior of solids (see many-body problem). On the other hand, the motion of a non-interacting classical particle is relatively simple; it would move in a straight line at constant velocity. This is the motivation for the concept of quasiparticles: The complicated motion of the real particles in a solid can be mathematically transformed into the much simpler motion of imagined quasiparticles, which behave more like non-interacting particles.
In summary, quasiparticles are a mathematical tool for simplifying the description of solids.
Relation to many-body quantum mechanics[edit]
Any system, no matter how complicated, has a ground state along with an infinite series of higher-energy excited states.
The principal motivation for quasiparticles is that it is almost impossible to directly describe every particle in a macroscopic system. For example, a barely-visible (0.1mm) grain of sand contains around 1017 nuclei and 1018 electrons. Each of these attracts or repels every other by Coulomb's law. In principle, the Schrödinger equation predicts exactly how this system will behave. But the Schrödinger equation in this case is a partial differential equation (PDE) on a 3×1018-dimensional vector space—one dimension for each coordinate (x,y,z) of each particle. Directly and straightforwardly trying to solve such a PDE is impossible in practice. Solving a PDE on a 2-dimensional space is typically much harder than solving a PDE on a 1-dimensional space (whether analytically or numerically); solving a PDE on a 3-dimensional space is significantly harder still; and thus solving a PDE on a 3×1018-dimensional space is quite impossible by straightforward methods.
One simplifying factor is that the system as a whole, like any quantum system, has a ground state and various excited states with higher and higher energy above the ground state. In many contexts, only the "low-lying" excited states, with energy reasonably close to the ground state, are relevant. This occurs because of the Boltzmann distribution, which implies that very-high-energy thermal fluctuations are unlikely to occur at any given temperature.
Quasiparticles and collective excitations are a type of low-lying excited state. For example, a crystal at absolute zero is in the ground state, but if one phonon is added to the crystal (in other words, if the crystal is made to vibrate slightly at a particular frequency) then the crystal is now in a low-lying excited state. The single phonon is called an elementary excitation. More generally, low-lying excited states may contain any number of elementary excitations (for example, many phonons, along with other quasiparticles and collective excitations).[6]
When the material is characterized as having "several elementary excitations", this statement presupposes that the different excitations can be combined together. In other words, it presupposes that the excitations can coexist simultaneously and independently. This is never exactly true. For example, a solid with two identical phonons does not have exactly twice the excitation energy of a solid with just one phonon, because the crystal vibration is slightly anharmonic. However, in many materials, the elementary excitations are very close to being independent. Therefore, as a starting point, they are treated as free, independent entities, and then corrections are included via interactions between the elementary excitations, such as "phonon-phonon scattering".
Therefore, using quasiparticles / collective excitations, instead of analyzing 1018 particles, one needs to deal with only a handful of somewhat-independent elementary excitations. It is, therefore, a very effective approach to simplify the many-body problem in quantum mechanics. This approach is not useful for all systems, however: In strongly correlated materials, the elementary excitations are so far from being independent that it is not even useful as a starting point to treat them as independent.
Distinction between quasiparticles and collective excitations[edit]
Usually, an elementary excitation is called a "quasiparticle" if it is a fermion and a "collective excitation" if it is a boson.[1] However, the precise distinction is not universally agreed upon.[3]
There is a difference in the way that quasiparticles and collective excitations are intuitively envisioned.[3] A quasiparticle is usually thought of as being like a dressed particle: it is built around a real particle at its "core", but the behavior of the particle is affected by the environment. A standard example is the "electron quasiparticle": an electron in a crystal behaves as if it had an effective mass which differs from its real mass. On the other hand, a collective excitation is usually imagined to be a reflection of the aggregate behavior of the system, with no single real particle at its "core". A standard example is the phonon, which characterizes the vibrational motion of every atom in the crystal.
However, these two visualizations leave some ambiguity. For example, a magnon in a ferromagnet can be considered in one of two perfectly equivalent ways: (a) as a mobile defect (a misdirected spin) in a perfect alignment of magnetic moments or (b) as a quantum of a collective spin wave that involves the precession of many spins. In the first case, the magnon is envisioned as a quasiparticle, in the second case, as a collective excitation. However, both (a) and (b) are equivalent and correct descriptions. As this example shows, the intuitive distinction between a quasiparticle and a collective excitation is not particularly important or fundamental.
The problems arising from the collective nature of quasiparticles have also been discussed within the philosophy of science, notably in relation to the identity conditions of quasiparticles and whether they should be considered "real" by the standards of, for example, entity realism.[7][8]
Effect on bulk properties[edit]
By investigating the properties of individual quasiparticles, it is possible to obtain a great deal of information about low-energy systems, including the flow properties and heat capacity.
In the heat capacity example, a crystal can store energy by forming phonons, and/or forming excitons, and/or forming plasmons, etc. Each of these is a separate contribution to the overall heat capacity.
The idea of quasiparticles originated in Lev Landau's theory of Fermi liquids, which was originally invented for studying liquid helium-3. For these systems a strong similarity exists between the notion of quasiparticle and dressed particles in quantum field theory. The dynamics of Landau's theory is defined by a kinetic equation of the mean-field type. A similar equation, the Vlasov equation, is valid for a plasma in the so-called plasma approximation. In the plasma approximation, charged particles are considered to be moving in the electromagnetic field collectively generated by all other particles, and hard collisions between the charged particles are neglected. When a kinetic equation of the mean-field type is a valid first-order description of a system, second-order corrections determine the entropy production, and generally take the form of a Boltzmann-type collision term, in which figure only "far collisions" between virtual particles. In other words, every type of mean-field kinetic equation, and in fact every mean-field theory, involves a quasiparticle concept.
Examples of quasiparticles and collective excitations[edit]
This section contains examples of quasiparticles and collective excitations. The first subsection below contains common ones that occur in a wide variety of materials under ordinary conditions; the second subsection contains examples that arise only in special contexts.
More common examples[edit]
• In solids, an electron quasiparticle is an electron as affected by the other forces and interactions in the solid. The electron quasiparticle has the same charge and spin as a "normal" (elementary particle) electron, and like a normal electron, it is a fermion. However, its mass can differ substantially from that of a normal electron; see the article effective mass.[1] Its electric field is also modified, as a result of electric field screening. In many other respects, especially in metals under ordinary conditions, these so-called Landau quasiparticles[citation needed] closely resemble familiar electrons; as Crommie's "quantum corral" showed, an STM can clearly image their interference upon scattering.
• A hole is a quasiparticle consisting of the lack of an electron in a state; it is most commonly used in the context of empty states in the valence band of a semiconductor.[1] A hole has the opposite charge of an electron.
• A phonon is a collective excitation associated with the vibration of atoms in a rigid crystal structure. It is a quantum of a sound wave.
• A magnon is a collective excitation[1] associated with the electrons' spin structure in a crystal lattice. It is a quantum of a spin wave.
• In materials, a photon quasiparticle is a photon as affected by its interactions with the material. In particular, the photon quasiparticle has a modified relation between wavelength and energy (dispersion relation), as described by the material's index of refraction. It may also be termed a polariton, especially near a resonance of the material. For example, an exciton-polariton is a superposition of an exciton and a photon; a phonon-polariton is a superposition of a phonon and a photon.
• A plasmon is a collective excitation, which is the quantum of plasma oscillations (wherein all the electrons simultaneously oscillate with respect to all the ions).
• A polaron is a quasiparticle which comes about when an electron interacts with the polarization of its surrounding ions.
• An exciton is an electron and hole bound together.
• A plasmariton is a coupled optical phonon and dressed photon consisting of a plasmon and photon.
More specialized examples[edit]
• A roton is a collective excitation associated with the rotation of a fluid (often a superfluid). It is a quantum of a vortex.
• Composite fermions arise in a two-dimensional system subject to a large magnetic field, most famously those systems that exhibit the fractional quantum Hall effect.[9] These quasiparticles are quite unlike normal particles in two ways. First, their charge can be less than the electron charge e. In fact, they have been observed with charges of e/3, e/4, e/5, and e/7.[10] Second, they can be anyons, an exotic type of particle that is neither a fermion nor boson.[11]
• Stoner excitations in ferromagnetic metals
• Bogoliubov quasiparticles in superconductors. Superconductivity is carried by Cooper pairs—usually described as pairs of electrons—that move through the crystal lattice without resistance. A broken Cooper pair is called a Bogoliubov quasiparticle.[12] It differs from the conventional quasiparticle in metal because it combines the properties of a negatively charged electron and a positively charged hole (an electron void). Physical objects like impurity atoms, from which quasiparticles scatter in an ordinary metal, only weakly affect the energy of a Cooper pair in a conventional superconductor. In conventional superconductors, interference between Bogoliubov quasiparticles is tough for an STM to see. Because of their complex global electronic structures, however, high-Tc cuprate superconductors are another matter. Thus Davis and his colleagues were able to resolve distinctive patterns of quasiparticle interference in Bi-2212.[13]
• A Majorana fermion is a particle which equals its own antiparticle, and can emerge as a quasiparticle in certain superconductors, or in a quantum spin liquid.[14]
• Magnetic monopoles arise in condensed matter systems such as spin ice and carry an effective magnetic charge as well as being endowed with other typical quasiparticle properties such as an effective mass. They may be formed through spin flips in frustrated pyrochlore ferromagnets and interact through a Coulomb potential.
• Skyrmions
• Spinon is represented by quasiparticle produced as a result of electron spin-charge separation, and can form both quantum spin liquid and strongly correlated quantum spin liquid in some minerals like Herbertsmithite.[15]
• Angulons can be used to describe the rotation of molecules in solvents. First postulated theoretically in 2015,[16] the existence of the angulon was confirmed in February 2017, after a series of experiments spanning 20 years. Heavy and light species of molecules were found to rotate inside superfluid helium droplets, in good agreement with the angulon theory.[17][18]
• Type-II Weyl fermions break Lorentz symmetry, the foundation of the special theory of relativity, which cannot be broken by real particles.[19]
• A dislon is a quantized field associated with the quantization of the lattice displacement field of a crystal dislocation. It is a quantum of vibration and static strain field of a dislocation line.[20]
See also[edit]
1. ^ a b c d e f E. Kaxiras, Atomic and Electronic Structure of Solids, ISBN 0-521-52339-7, pages 65–69.
2. ^ Ashcroft and Mermin (1976). Solid State Physics (1st ed.). Holt, Reinhart, and Winston. pp. 299–302. ISBN 978-0030839931.
3. ^ a b c A guide to Feynman diagrams in the many-body problem, by Richard D. Mattuck, p10. "As we have seen, the quasiparticle consists of the original real, individual particle, plus a cloud of disturbed neighbors. It behaves very much like an individual particle, except that it has an effective mass and a lifetime. But there also exist other kinds of fictitious particles in many-body systems, i.e. 'collective excitations'. These do not center around individual particles, but instead involve collective, wavelike motion of all the particles in the system simultaneously."
4. ^ "Ultracold atoms permit direct observation of quasiparticle dynamics". Physics World. 18 March 2021. Retrieved 26 March 2021.
5. ^ Kozhevnikov, A. B. (2004). Stalin's great science : the times and adventures of Soviet physicists. London: Imperial College Press. ISBN 1-86094-601-1. OCLC 62416599.
6. ^ Ohtsu, Motoichi; Kobayashi, Kiyoshi; Kawazoe, Tadashi; Yatsui, Takashi; Naruse, Makoto (2008). Principles of Nanophotonics. CRC Press. p. 205. ISBN 9781584889731.
7. ^ Gelfert, Axel (2003). "Manipulative success and the unreal". International Studies in the Philosophy of Science. 17 (3): 245–263. CiteSeerX doi:10.1080/0269859032000169451.
8. ^ B. Falkenburg, Particle Metaphysics (The Frontiers Collection), Berlin: Springer 2007, esp. pp. 243–46
9. ^ "Physics Today Article".
10. ^ "Cosmos magazine June 2008". Archived from the original on 9 June 2008.
11. ^ Goldman, Vladimir J (2007). "Fractional quantum Hall effect: A game of five-halves". Nature Physics. 3 (8): 517. Bibcode:2007NatPh...3..517G. doi:10.1038/nphys681.
12. ^ "Josephson Junctions". Science and Technology Review. Lawrence Livermore National Laboratory.
13. ^ J. E. Hoffman; McElroy, K; Lee, DH; Lang, KM; Eisaki, H; Uchida, S; Davis, JC; et al. (2002). "Imaging Quasiparticle Interference in Bi2Sr2CaCu2O8+δ". Science. 297 (5584): 1148–51. arXiv:cond-mat/0209276. Bibcode:2002Sci...297.1148H. doi:10.1126/science.1072640. PMID 12142440.
14. ^ Banerjee, A.; Bridges, C. A.; Yan, J.-Q.; et al. (4 April 2016). "Proximate Kitaev quantum spin liquid behaviour in a honeycomb magnet". Nature Materials. 15 (7): 733–740. arXiv:1504.08037. Bibcode:2016NatMa..15..733B. doi:10.1038/nmat4604. PMID 27043779.
15. ^ Shaginyan, V. R.; et al. (2012). "Identification of Strongly Correlated Spin Liquid in Herbertsmithite". EPL. 97 (5): 56001. arXiv:1111.0179. Bibcode:2012EL.....9756001S. doi:10.1209/0295-5075/97/56001.
16. ^ Schmidt, Richard; Lemeshko, Mikhail (18 May 2015). "Rotation of Quantum Impurities in the Presence of a Many-Body Environment". Physical Review Letters. 114 (20): 203001. arXiv:1502.03447. Bibcode:2015PhRvL.114t3001S. doi:10.1103/PhysRevLett.114.203001. PMID 26047225.
17. ^ Lemeshko, Mikhail (27 February 2017). "Quasiparticle Approach to Molecules Interacting with Quantum Solvents". Physical Review Letters. 118 (9): 095301. arXiv:1610.01604. Bibcode:2017PhRvL.118i5301L. doi:10.1103/PhysRevLett.118.095301. PMID 28306270.
18. ^ "Existence of a new quasiparticle demonstrated". Phys.org. Retrieved 1 March 2017.
19. ^ Xu, S.Y.; Alidoust, N.; Chang, G.; et al. (2 June 2017). "Discovery of Lorentz-violating type II Weyl fermions in LaAlGe". Science Advances. 3 (6): e1603266. Bibcode:2017SciA....3E3266X. doi:10.1126/sciadv.1603266. PMC 5457030. PMID 28630919.
20. ^ Li, Mingda; Tsurimaki, Yoichiro; Meng, Qingping; Andrejevic, Nina; Zhu, Yimei; Mahan, Gerald D.; Chen, Gang (2018). "Theory of electron–phonon–dislon interacting system—toward a quantized theory of dislocations". New Journal of Physics. 20 (2): 023010. arXiv:1708.07143. doi:10.1088/1367-2630/aaa383.
Further reading[edit]
• L. D. Landau, Soviet Phys. JETP. 3:920 (1957)
• L. D. Landau, Soviet Phys. JETP. 5:101 (1957)
• A. A. Abrikosov, L. P. Gor'kov, and I. E. Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics (1963, 1975). Prentice-Hall, New Jersey; Dover Publications, New York.
• D. Pines, and P. Nozières, The Theory of Quantum Liquids (1966). W.A. Benjamin, New York. Volume I: Normal Fermi Liquids (1999). Westview Press, Boulder.
• J. W. Negele, and H. Orland, Quantum Many-Particle Systems (1998). Westview Press, Boulder
External links[edit] |
0f787ed764d01114 | The Born rule is obvious
Philip Ball has just published an excellent article in the Quanta magazine about two recent attempts at understanding the Born rule: one by Masanes, Galley, and Müller, where they derive the Born rule from operational assumptions1, and another by Cabello, that derives the set of quantum correlations from assumptions about ideal measurements.
I’m happy with how the article turned out (no bullshit, conveys complex concepts in understandable language, quotes me ;), but there is a point about it that I’d like to nitpick: Ball writes that it was not “immediately obvious” whether the probabilities should be given by $\psi$ or $\psi^2$. Well, it might not have been immediately obvious to Born, but this is just because he was not familiar with Schrödinger’s theory2. Schrödinger, on the other hand, was very familiar with his own theory, and in the very paper where he introduced the Schrödinger equation he discussed at length the meaning of the quantity $|\psi|^2$. He got it wrong, but my point here is that he knew that $|\psi|^2$ was the right quantity to look at. It was obvious to him because the Schrödinger evolution is unitary, and absolute values squared behave well under unitary evolution.
Born’s contribution was, therefore, not mathematical, but conceptual. What he introduced was not the $|\psi|^2$ formula, but the idea that this is a probability. And the difficulty we have with the Born rule until today is conceptual, not mathematical. Nobody doubts that the probability must be given by $|\psi|^2$, but people are still puzzled by these high-level, ill-defined concepts of probability and measurement in an otherwise reductionist theory. And I think one cannot hope to understand the Born rule without understanding what probability is.
Which is why I don’t think the papers of Masanes et al. and Cabello can explain the Born rule. They refuse to tackle the conceptual difficulties, and focus on the mathematical ones. What they can explain is why quantum theory immediately goes down in flames if we replace the Born rule with anything else. I don’t want to minimize this result: it is nontrivial, and solves something that was bothering me for a long time. I’ve always wanted to find a minimally reasonable alternative to the Born rule for my research, and now I know that there isn’t one.
This is what I like, by the way, in the works of Saunders, Deutsch, Wallace, Vaidman, Carroll, and Sebens. They tackle the conceptual difficulties with probability and measurement head on3. I’m not satisfied with their answers, for several reasons, but at least they are asking the right questions.
This entry was posted in Uncategorised. Bookmark the permalink.
4 Responses to The Born rule is obvious
1. gentzen says:
Nice to have a non-aligned and sceptical commentator like Araújo in the article.
Ty Rex is right, your contributions made the article more enjoyable. And this blog posts adds further clarity.
2. Curious says:
This is a genuine question out of ignorance. Do you think there is anything to the fact that Cabello derived the Hilbert space framework as the most general probability theory you can apply to idealised measurements?
He doesn’t derive the fact that it would be a complex Hilbert space, but it seems to show if you’re an agent sitting there doing measurements you should use the Born rule unless you find statistical properties in the system that allow you to assume the extra steps needed to narrow down to Kolmogorov probability, e.g. no measurements fundamentally disturb others etc.
3. Mateus Araújo says:
I’m not very impressed, to be honest. The assumption that the probabilities come from ideal measurements is quite strong – why should one assume a priori that measurements are repeatable, or that joint measurability implies non-disturbance? I think what it shows is that if you have convinced yourself that your measurements behave like this, then you should expect the correlations you produce to be the quantum ones.
Also, I wouldn’t use the expression “Kolmogorov probability”, as it is rather ill-defined. If you mean probabilities that don’t have any property other than positivity and normalisation, well, then your statement is false, because the set of quantum correlations is much more restricted than that.
4. Curious says:
Thanks for that. You’re right, I should have said Classical Probability Theory.
Leave a Reply
|
26cdb03cac715669 | Main Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
0 / 0
How much do you like this book?
What’s the quality of the file?
Download the book for quality assessment
What’s the quality of the downloaded files?
Nuclear magnetic resonance (NMR) is widely used across many fields because of the rich data it produces, and some of the most valuable data come from the study of nuclear spin relaxation in solution. While described to varying degrees in all major NMR books, spin relaxation is often perceived as a difficult, if not obscure, topic, and an accessible, cohesive treatment has been nearly impossible to find. Collecting relaxation theory, experimental techniques, and illustrative applications into a single volume, this book clarifies the nature of the phenomenon, shows how to study it, and explains why such studies are worthwhile. Coverage ranges from basic to rigorous theory and from simple to sophisticated experimental methods, and the level of detail is somewhat greater than most other NMR texts. Topics include cross-relaxation, multispin phenomena, relaxation studies of molecular dynamics and structure, and special topics such as relaxation in systems with quadrupolar nuclei and paramagnetic systems. Avoiding overly demanding mathematics, the authors explain relaxation in a manner that anyone with a basic familiarity with NMR can follow, regardless of their specialty. The focus is on illustrating and explaining the physical nature of the phenomena, rather than the intricate details. Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications forms useful supplementary reading for graduate students and a valuable desk reference for NMR spectroscopists, whether in chemistry, physics, chemical physics, or biochemistry.
Taylor & Francis
ISBN 10:
ISBN 13:
Series in Chemical Physics
PDF, 8.04 MB
Download (pdf, 8.04 MB)
You may be interested in Powered by Rec2Me
Most frequently terms
Number of Substitutions Omitting at Least one Letter in a Transitive Group
PDF, 280 KB
Nuclear Power Pollution and Politics
PDF, 1.63 MB
Nuclear Spin Relaxation in Liquids:
Theory, Experiments, and Applications
Series in Chemical Physics
The Series in Chemical Physics is an international series that meets the need for
up-to-date texts on theoretical and experimental aspects in this rapidly developing
field. Books in the series range in level from introductory monographs and
practical handbooks to more advanced expositions of current research. The books
are written at a level suitable for senior undergraduates and graduate students,
and will also be useful for practicing chemists, physicists, and chemical physicists
who wish to refresh their knowledge of a particular field.
Series Editors: J H Moore, University of Maryland, USA
N D Spencer, ETH-Zurich, Switzerland
series includes:
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
Jozef Kowalewski and Lena Mäler
Fundamentals of Molecular Symmetry
Philip R Bunker and Per Jensen
Series in Chemical Physics
Nuclear Spin Relaxation in Liquids:
Theory, Experiments, and Applications
Jozef Kowalewski
Stockholm University, Sweden
Lena Mäler
Stockholm University, Sweden
New York London
Published in 2006 by
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2006 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group
No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1
International Standard Book Number-10: 0-7503-0964-4 (Hardcover)
International Standard Book Number-13: 978-0-7503-0964-6 (Hardcover)
Library of Congress Card Number 2005054885
responsibility for the validity of all materials or for the; consequences of their use.
system of payment has been arranged.
for identification and explanation without intent to infringe.
Library of Congress Cataloging-in-Publication Data
Kowalewski, Jozef.
Nuclear spin relaxation in liquids : theory, experiments, and applications / by Jozef Kowalewski
and Lena Mäler.
p. cm. -- (Series in chemical physics ; 2)
Includes bibliographical references and index.
ISBN 0-7503-0964-4
1. Relaxation phenomena. 2. Nuclear spin. I. Mäler, Lena. II. Title. III. Series in chemical physics
(Taylor & Francis Group) ; 2.
QC173.4.R44K69 2006
Visit the Taylor & Francis Web site at
Taylor & Francis Group
is the Academic Division of Informa plc.
and the CRC Press Web site at
Nuclear magnetic resonance (NMR) is a powerful branch of spectroscopy,
with a wide range of applications within chemistry, physics, material science,
biology, and medicine. The importance of the technique has been recognized
by the Nobel prizes in chemistry awarded to Professor Richard Ernst in 1991
for methodological development of high-resolution NMR spectroscopy and
to Professor Kurt Wüthrich in 2002 for his development of NMR techniques
for structure determination of biological macromolecules in solution.
NMR employs a quantum-mechanical property called spin, the origin of
the characteristic magnetic properties of atomic nuclei, and its behavior in
an external magnetic field. NMR provides discriminating detail at the molecular level regarding the chemical environment of individual atomic nuclei.
Through-bond or through-space connections involving pairs of nuclei, rates
of certain chemical reactions, and water distribution and mobility in tissue
typify the information commonly available from the NMR experiment. Perhaps the richest information is obtained from studies of nuclear spin relaxation (NMR relaxation) in solution, which investigate how the transfer of
energy and order occur among nuclear spins and between the spin system
and its environment. Various aspects of NMR relaxation in liquids are the
topic of this book.
The relaxation phenomena are described in all major NMR books, with a
varying level of depth and detail. We found it worthwhile to collect in a
single volume the relaxation theory, experimental techniques, and illustrative applications. We go into more detail in these fields than is found in most
general NMR texts. The goal is to explain relaxation in a way that would be
possible for readers familiar with the basics of NMR to follow, such as
advanced undergraduate and graduate students in chemistry, biochemistry,
biophysics, and related fields.
Most of the book is not too demanding mathematically, with the objective
to illustrate and explain the physical nature of the phenomena rather than
their intricate details. Some of the theoretical chapters (Chapter 4 through
Chapter 6) go into more depth and contain more sophisticated mathematical
tools. These chapters are directed more to specialists and may be omitted at
first reading. Most of the key results obtained in these chapters are also given
in the experimental or application sections.
The outline of the book is as follows: After an introduction in Chapter 1,
Chapter 2 and Chapter 3 present a fairly simple version of relaxation theory,
including the description of dipolar relaxation in a two-spin system at the level
of Solomon equations; they also contain a sketchy presentation of the intermolecular relaxation. Chapter 4 and Chapter 5 provide a more sophisticated
and general version of the theory, as well as a discussion of other relaxation
mechanisms. Various theoretical tools are presented and applied to appropriate problems and examples.
Chapter 6 deals with spectral densities, the link connecting the relaxation
theory with the liquid dynamics. Chapter 7 is an introduction to experimental tools of NMR, with emphasis on relaxation-related experimental techniques for solution state studies. The following chapters cover experimental
methods for studying increasingly sophisticated phenomena of single-spin
relaxation (Chapter 8), cross-relaxation (Chapter 9), and more general multispin phenomena (Chapter 10). In each chapter, a progression of techniques
is described, starting with simple methods for simple molecules and moving
on to advanced tools for studies of biological macromolecules.
Chapter 11 and Chapter 12 are devoted to applications of relaxation studies
as a source of information on molecular dynamics and molecular structure.
The remaining four chapters deal with a variety of special topics:
• The effects of chemical (often conformational) exchange on relaxation phenomena and relaxation-related measurements (Chapter 13)
• Relaxation processes in systems containing quadrupolar (I ≥ 1) nuclei
(Chapter 14)
• Paramagnetic systems containing unpaired electron spin (Chapter 15)
• A brief presentation of NMR relaxation in other aggregation states
of matter (Chapter 16)
The book is not meant to be a comprehensive source of literature references
on relaxation. We are rather selective in our choice of references, concentrating on reviews and original papers with pedagogic qualities that have been
chosen at least partly according to our tastes. The references are collected
after each chapter. Some of the NMR (and related) textbooks useful in the
context of more than one chapter are collected in a special list provided
immediately after this preface.
Most of the figures have been prepared especially for this book; some are
reproduced with permission of the copyright owners, who are acknowledged for their generosity. In the process of writing this book, we have
obtained assistance from and support of several students and colleagues.
Among these, we wish in particular to mention Hans Adolfsson, August
Andersson, Peter Damberg, Jens Danielsson, Danuta Kruk, Arnold Maliniak,
Giacomo Parigi, and Dick Sandström. We are very grateful to Nina Kowalewska
and Hans Adolfsson for their endurance and support during the 2 years it
took to write this text.
Further Reading
Abragam, A. The Principles of Nuclear Magnetism (Oxford: Oxford University Press),
Atkins, P.W. and Friedman, R.S. Molecular Quantum Mechanics (Oxford: Oxford University Press), 1997.
Bakhmutov, V.I. Practical NMR Relaxation for Chemists (Chichester: Wiley), 2004.
Brink, D.M. and Satchler, G.R. Angular Momentum (Oxford: Clarendon Press), 1993.
Canet, D. Nuclear Magnetic Resonance: Concepts and Methods (Chichester: Wiley), 1996.
Cavanagh, J., Fairbrother, W.J., Palmer, A.G., and Skelton, N.J. Protein NMR Spectroscopy (San Diego: Academic Press), 1996.
Ernst, R.R., Bodenhausen, G., and Wokaun, A. Principles of Nuclear Magnetic Resonance
in One and Two Dimensions (Oxford: Clarendon Press), 1987.
Goldman, M. Quantum Description of High-Resolution NMR in Liquids (Oxford:
Clarendon Press), 1988.
Hennel, J.W. and Klinowski, J. Fundamentals of Nuclear Magnetic Resonance (Harlow:
Longman), 1993.
Levitt, M.H. Spin Dynamics (Chichester: Wiley), 2001.
McConnell, J. The Theory of Nuclear Magnetic Relaxation in Liquids (Cambridge: Cambridge University Press), 1987.
Neuhaus, D. and Williamson, M.P. The Nuclear Overhauser Effect in Structural and
Conformational Analysis (New York: VCH), 1989.
Slichter, C.P. Principles of Magnetic Resonance (Berlin: Springer), 1989.
Van Kampen, N.G. Stochastic Processes in Physics and Chemistry (Amsterdam: NorthHolland), 1981.
Wüthrich, K. NMR of Proteins and Nucleic Acids (New York: Wiley), 1986.
The Authors
Jozef Kowalewski received his Ph.D. from Stockholm University in 1975.
He began working with nuclear spin relaxation as a postdoctoral fellow at
Florida State University. He became a professor in physical chemistry at
Stockholm University in 1986 and chairman of the department of physical,
inorganic, and structural chemistry in 1993. Dr. Kowalewski was a member
of the chemistry committee of the Swedish Natural Science Research Council
in the late 1990s and is currently a member of the advisory editorial board
of Magnetic Resonance in Chemistry. He has supervised 15 Ph.D. students and
authored about 150 scientific papers.
Lena Mäler received her Ph.D. from Stockholm University in 1996, working
with Jozef Kowalewski in nuclear spin relaxation. She was a postdoctoral
fellow at the Scripps Research Institute and became an assistant professor
in the Department of Biophysics in 1999. Dr. Mäler is currently associate
professor at the Department of Biochemistry and Biophysics at Stockholm
Equilibrium and Nonequilibrium
Applications of Redfield Theory to Systems
Spectral Densities and Molecular Dynamics .......................... 127
Measuring T 1 and T 2 Relaxation Times.................................... 177
10 Cross-Correlation and Multiple-Quantum
12 Applications of Relaxation-Related Measurements
15 Nuclear Spin Relaxation in Paramagnetic
16 Nuclear Spin Relaxation in Other
Equilibrium and Nonequilibrium
States in NMR
1.1 Individual Spins: Elements of Quantum Mechanics ...............................2
1.4 Coupled (Interacting) Spins: the Product
The concepts of equilibrium and deviation from equilibrium have a central
role in physical chemistry. Once a system is in a nonequilibrium state, it will
tend to return to equilibrium in a process which does not occur instantaneously. This general phenomenon of development towards equilibrium is
called relaxation. In the specific context of nuclear magnetic resonance, the
equilibrium state is that of a macroscopic sample of nuclear spins in a
magnetic field. In order to talk about a development towards equilibrium,
we need to define and be able to create an initial state, which is different
from the equilibrium state.
Using experimental techniques of present day NMR spectroscopy — in
the first place, radiofrequency pulses — it is possible to create a large variety
of nonequilibrium states. Starting from a given nonequilibrium situation, the
spin systems evolve in a complicated way in which the “return to equilibrium” processes compete with related phenomena, converting one type of
nonequilibrium into another.
In NMR, the processes by which the return to equilibrium is achieved are
denoted “spin relaxation.” Relaxation experiments were among the earliest
NMR applications of modern Fourier transform NMR, and the development
of applications and experimental procedures has been enormous. Consequently, there is a vast literature on NMR relaxation regarding theoretical
considerations and advances in methodology. Relaxation is important
because it has effects on how NMR experiments are carried out, but perhaps
even more valuable is the information content derived from relaxation
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
parameters. Information about the physical processes governing relaxation
can be obtained from experimental NMR parameters.
In order to set the stage for theoretical descriptions of spin relaxation, we
need to define the spin system and the means of manipulating the spin
systems to obtain nonequilibrium states. In this chapter, we provide elementary tools for description of spin systems and their equilibrium and nonequilibrium states.
Individual Spins: Elements of Quantum Mechanics
In order to arrive at a description of a large number of spins, we need to
start with a single spin. The properties of spin are obtained using quantum
mechanics. A brief account of the necessary quantum mechanical background is given in this section; see the further reading section in this book’s
preface (in particular, the books by Atkins and Friedman, Goldman, Levitt,
and Slichter) for a more complete presentation.
The basis of quantum mechanics can be formulated as a series of postulates
— fundamental statements assumed to be true without proof. The proof of
these postulates lies in the fact that they successfully predict outcome of
experiments. One field in which this works extremely well is nuclear magnetic resonance. The first postulate of quantum mechanics states that the
state of a physical system is described, as fully as it is possible, by a wave
function for the system. The wave function is a function of relevant coordinates and of time. In the case of spin, the wave functions are defined in terms
of a spin coordinate. The important property of the spin wave functions is
the fact that they can be formulated exactly in terms of linear combinations
of a finite number of functions with known properties.
The second postulate of quantum mechanics states that, for every measurable quantity (called observable), an associated operator exists. An operator, Q̂ ,
(we are going to denote operators, with certain exceptions specified later,
with a “hat”) is a mathematical object that operates on a function and creates
another function. In certain cases, the action of an operator on a function
yields the same function, multiplied by a constant. The functions with this
property are called the eigenfunctions of the operator and the corresponding
constants the eigenvalues.
In the context of spins, the most important operators are the spin angular
momentum operators. Every nuclear species is characterized by a nuclear spin
quantum number, I, which can attain integer or half-integer values. The
nuclear spin quantum number is related to the eigenvalue of the total spin
angular momentum operator, Î 2 , through:
Iˆ 2ψ = I ( I + 1)ψ
where ψ is an eigenfunction of the operator Î 2 .
Equilibrium and Nonequilibrium States in NMR
The third postulate of quantum mechanics states that if a system is
described by a wave function that is an eigenfunction of a quantum mechanical operator, then the result of every measurement of the corresponding
observable will be equal to that eigenvalue. Equation (1.1) assumes that the
spin operators are dimensionless, a convention followed in this book. A
natural unit for angular momentum in quantum mechanics is otherwise ,
the Planck constant divided by 2π ( = 1.05457⋅10–34 Js). The magnitude of the
angular momentum — which is obtained in every measurement — in these
units becomes I ( I + 1) . If I = 0, then the nucleus has no spin angular momentum and is not active in NMR.
The z-component of the spin angular momentum vector is another important operator. This operator is related to the second of two angular momentum
quantum numbers. Besides I, there is also m, which specifies the z-component
of the spin angular momentum vector:
Î zψ = m ψ
The quantum number m can attain values ranging between –I and I, in steps
of one. We can label the eigenfunctions ψ to the operators Î 2 and Îz with the
corresponding quantum numbers, ψI,m. The eigenfunctions are normalized, in
the sense that the integral over all space of the product of the eigenfunction
ψI,m with its complex conjugate, ψ I*,m , is equal to unity, ∫ ψ I*,mψ I ,mdσ = 1. σ is
a variable of integration in the spin space (the spin coordinate). Functions used
in quantum mechanics are often complex and the symbol ψ * denotes the
complex conjugate of the function ψ, i.e., a corresponding function in which
all the imaginary symbols i are replaced by –i (we note that i = −1 ).
The normalization requirement is a property of quantum mechanical wave
functions. The square of the absolute value of a wave function, ψ *ψ, is related
to the probability density of finding the system at a certain value of a relevant
coordinate. Consequently, the probability density must integrate to unity
over all space. If we integrate, on the other hand, a product of two eigenfunctions corresponding to different values of the quantum number m, the
integral vanishes; ∫ ψ I*,mψ I ,m 'dσ = 0 if m ≠ m′, and we say that the functions
are orthogonal. The set of functions fulfilling the requirements of normalization and orthogonality is called orthonormal. An alternative way of describing
these requirements is through the relation ∫ ψ I*,mψ I ,m′ dσ = δ mm′ , where the symbol
δmm′ is called the Kronecker delta and is equal to unity if m = m′ and to zero
It is often convenient to use the bra-ket (bracket) notation, where the
eigenfunctions are treated as unit vectors denoted by a “ket,” |I,m〉. In the
bra-ket notation, the normalization condition becomes 〈I,m|I,m〉 = 1, in which
the “bra,” 〈I,m|, is associated with ψ *I,m. The star on the first symbol in
〈I,m|I,m〉 is excluded because the bra symbol, 〈|, implies complex conjugation. The integration is replaced by the scalar product of a bra, 〈I,m|, and a
ket, |I,m〉.
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
For simplicity, we concentrate at this stage on nuclei with the nuclear spin
quantum number I = 1/2 (which, in fact, we shall work with throughout
most of this book). In this case, there are two possible eigenvalues m of the
z-component of the spin: −1/2 and +1/2. The spin eigenfunctions corresponding to these eigenvalues are denoted β and α, respectively. Using the
bra-ket formalism, we will use the notation |1/2,–1/2〉 = |β 〉 and |1/2,1/2〉 =
|α〉. The orthogonality of the eigenfunctions can be formulated very compactly in the bracket notation: 〈α|β 〉 = 0.
If we represent functions as vectors, an operator transforms one vector
into another. A set of orthogonal vectors defines a vector space, denoted as
Hilbert space, and an operator performs a transformation of one vector in
this space into another. Such an operation can be represented by a matrix in
the Hilbert space. The elements of such a matrix, called matrix elements of the
operator Q̂ , are defined as:
Qij = i Qˆ j
The symbols i and j can refer to different eigenstates of the quantum
system or to other vectors in the Hilbert space. Qˆ|j〉 is the ket obtained as
a result of Q̂ operating on |j〉 and Qij can be considered as a scalar product
of the bra 〈i|and that ket. For example, the matrix representation of the
z-component of the spin angular momentum in the space defined by the
eigenvectors |α〉 and |β 〉 is:
1 0
Î z = 2
0 − 2
The matrix is diagonal as a result of our choice of the eigenvectors of the
operator Î z as the basis set. We can also obtain matrix representations for the
x- and y-components of spin in the basis of |α〉 and |β 〉:
0 1
Î x = 1 2
2 0
0 − i
Î y = i
2 0
The quantum mechanical operators corresponding to physically measurable quantities are hermitian, i.e., their matrix elements conform to the relation 〈i|Qˆ| j 〉 = 〈 j|Qˆ|i 〉 * (note how this functions for the Î y!). The matrix
representations of I x , Î y, and Î z in Equations (1.4) are called the Pauli spin
Equilibrium and Nonequilibrium States in NMR
In quantum mechanics, one often needs to operate on a function (vector)
with two operators after each other. In this case, we can formally speak about
a product of two operators. The order in which the operators act may be
important: we say that the multiplication of operators is in general noncommutative. From this, we can introduce the concept of a commutator of two
operators, Q̂ and P̂ :
ˆ ˆ − PQ
Qˆ , Pˆ = QP
For certain pairs of operators, the commutator vanishes and we say that the
operators Q̂ and P̂ commute. The components of spin, Î x , Î y and Î z , do not commute with each other, but each of them commutes with Î 2 . Important theorems about commuting operators state that they have a common set of
eigenfunctions and that the corresponding observables can simultaneously have
precisely defined values. The proofs of these theorems can be found in the book
by Atkins and Friedman (further reading).
The spin angular momentum is simply related to the nuclear magnetic
moment. In terms of vector components, we can write, for example:
µz = γI Iz
µˆ z = γ I Iˆz
where µ z and Iz denote measurable quantities and the quantities with “hats”
represent the corresponding quantum mechanical operators. The quantity γI
is called the magnetogyric ratio (the notation gyromagnetic ratio is also used).
Magnetogyric ratios for some common nuclear species are summarized in
Table 1.1, together with the corresponding quantum numbers and natural
Properties of Some Common Nuclear Spin Species
ratio, γI/
(107 rad T–1s–1)
Larmor frequency
at 9.4 T, |ωI|/
(108 rads–1)
Q/(10–31 m2)
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
When a nuclear spin is placed in a magnetic field, B, with the magnitude
B0, the field interacts with the magnetic moment (the Zeeman interaction). The
symbols B and B0 are, strictly speaking, the magnetic induction vector or its
magnitude, but we are going to refer to these terms as magnetic field, in
agreement with a common practice. The magnetic field in NMR is always
assumed to define the laboratory-frame z-axis. In classical physics, the energy
of the interaction is written as minus the scalar product of the magnetic
moment vector and the magnetic field vector, E = – ⋅ B = –µzB0. In quantum
mechanics, the Zeeman interaction is described by the Zeeman Hamiltonian:
ˆ = −γ B Iˆ
I 0 z
The Hamiltonian — or the total energy operator — is a very important operator in quantum mechanics. The eigenfunctions (or eigenvectors) of a quantum
system with a general Hamiltonian, Ĥ , fulfill the time-independent Schrödinger
ˆ|j 〉 = E |j 〉, in the bra-ket notation. For I = 1/2, the
equation, Ĥψ j = Ejψ j or H
Zeeman Hamiltonian has two eigenvalues, E1 2 = − 21 γ I B0 and E−1 2 = 21 γ I B0 ,
corresponding to the eigenvectors |α〉 and| β〉, respectively. The Hamiltonian
and its eigenvalues are expressed in angular frequency units, a convention
followed throughout this book.
The eigenvalues can easily be converted into “real” energy units by multiplying with . If the magnetogyric ratio is positive (which is the case, for
example, for 1H, and 13C nuclei), the α state corresponds to the lowest energy.
The difference between the two eigenvalues, ω0 = γIB0, is called the Larmor
frequency. (We will use the symbol Ej for energy eigenvalues in angular
frequency units and the symbol ω with appropriate index or indices for the
energy differences in the same units.)
Quantum mechanics does not require the system to be in a specific eigenstate
of the Hamiltonian. A spin-1/2 system can also exist in a superposition state,
i.e., a superposition of the two states |α〉 and |β 〉. The superposition state is
described by a wave function:
ψ = cα α + cβ β
which is subject to the normalization condition, |cα |2 + |cβ | 2 = 1. The coefficients cα and cβ are complex numbers and the squares of the absolute values
of the coefficients provide the weights (probabilities) of the two eigenstates
in the superposition state.
In NMR, the time evolution of quantum systems is of primary interest and
the time-dependent form of the Schrödinger equation is very important. Another
postulate of quantum mechanics states that the time evolution of the wave
function for a quantum system is given by the time-dependent Schrödinger
∂ψ (t)
ˆ ψ (t)
= −iH
Equilibrium and Nonequilibrium States in NMR
which explains the central role of the Hamilton operator or the Hamiltonian.
Note that, following the convention that the Hamiltonian is in angular frequency units, the symbol that is usually present on the left-hand side of
the time-dependent Schrödinger equation is dropped.
If the system is initially in one of the eigenstates and the Hamiltonian is
independent of time, as is the case for the Zeeman Hamiltonian of
Equation (1.7), then the solutions to Equation (1.9) are simply related to
the eigenfunctions and eigenvalues of the time-independent Schrödinger
ψ (t) = ψ j exp(−iEjt)
Thus, the system described originally by an eigenstate remains in that
eigenstate. The eigenstates are therefore also denoted as stationary solutions
to the time-dependent Schrödinger equation or stationary states. The complex
exponential factor is called the phase factor (a complex exponential can be
expressed in terms of cosine and sine functions, exp(iα) = cosα + i sinα). A
more general time-dependent solution can be expressed as a linear combination of functions given in Equation (1.10):
ψ (t) =
∑c ψ
exp(−iEmt) I , m
I ,m
m=− I
or, using the bracket notation,
ψ (t) =
m=− I
Another important quantity required for the discussion of quantum systems is the expectation value. Through still another of the postulates of quantum mechanics, the expectation value of an operator corresponding to an
observable is equal to the average value of a large number of measurements
of the observable under consideration for a system not in the eigenstate of
that operator. Consider, for example, the x-component of the nuclear magnetic moment; the expectation value of this quantity at time t is defined by:
µˆ x (t) = ψ *(t)µˆ xψ (t)dσ
µˆ x (t) = ψ (t) µˆ x ψ (t)
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
We note that the expectation value of the x-component of the magnetic
moment, 〈 µˆ x 〉(t), is explicitly time dependent, while the operator µ̂ x is not.
Thus, the time dependence of the expectation value originates from the time
dependence of the wave functions. This way of expressing the time dependence is denoted the Schrödinger representation.
Using the definitions of Equation (1.11a) and Equation (1.11b), as well as
the relation µˆ x = γ I Iˆx , we can evaluate Equation (1.12b):
µˆ x (t) = γ I
∑ ∑c
m′ m
I , m′ Iˆx I , m exp i(Em′ − Em )t
From elementary properties of angular momentum (see, for example,
Atkins and Friedman (further reading)), we know that expressions for timeindependent matrix elements 〈 I , m′|Iˆx|I , m〉 = ∫ ψ I*,m′ Iˆxψ I ,mdσ vanish, unless
m′ = m ± 1. For the case of I = 1/2, we thus have nonvanishing elements for
m′ = 1/2 if m = −1/2 or vice versa, corresponding to the matrix representation
of Î x in Equation (1.4b). When this condition is fulfilled, we can see that the
argument of the exponential function becomes ±iω0t, where ω0 = –γIB0 is the
Larmor frequency. Noting both these relations, one can demonstrate (see
Slichter’s book (further reading) or do it as an exercise) that the expectation
value of µ̂ x oscillates with time at the Larmor frequency.
In the same way, one can show that the expectation value of µˆ = γ Iˆ
I y
also oscillates at the Larmor frequency, and the expectation value of the
z-component is constant with time. A detailed analysis (see Slichter’s book)
shows that the quantum mechanical expectation value of the nuclear magnetic moment vector operator ˆ behaves in analogy with a vector of length
γI/2 moving on a conical surface around the z-axis — the direction of the
magnetic field — in analogy to the classical Larmor precession.
Another important result is that the angle between the conical surface
and the magnetic field can have an arbitrary value and that the timeindependent z-component of the magnetic moment of a nuclear spin in
a superposition state can hold any arbitrary value between –γI/2 and + γI/2.
We emphasize that this result is consistent with the notion that the spins
can exist in superposition states and are not confined to pointing parallel
or antiparallel to the external field. In the same vein, we cannot associate
an individual spin in a superposition state with any of the two energy
eigenstates (energy levels) of the Hamiltonian, even though the motion
is clearly related to the energy difference between the Zeeman states.
Ensembles of Spins: the Density Operator
So far, we have only considered the properties of one single spin. The concepts of relaxation and equilibrium are closely connected to the behavior of
macroscopic samples of spins. A theoretical tool we need to use is that of an
Equilibrium and Nonequilibrium States in NMR
ensemble of spins — a large collection of identical and independent systems.
For simplicity, we deal here with an ensemble of spin-1/2 particles, interacting with the magnetic field through the Zeeman interaction but not interacting with each other.
The density operator method is an elegant way to deal with a very large
number (on the order of 1021) of quantum systems, corresponding to a macroscopic sample. We present here a very brief summary of the technique and
recommend the books in the further reading section (especially the books
by Abragam, Ernst et al., Goldman, Levitt, and Slichter) for more comprehensive presentations. The density operator approach allows us to calculate
expectation values of operators for ensembles of quantum systems rather than
for individual systems. Let us assume that a certain individual spin is
described by a superposition state wave function, according to Equation (1.8).
We disregard for the moment the time dependence. The expectation value of
an operator Q̂ is for that spin given by:
Qˆ =
∑ ∑c
m′ m
m=− I
I , m′ Qˆ I , m
m′=− I
= cα* cα α Qˆ α + cβ* cα β Qˆ α + cα* cβ α Qˆ β + cβ* cβ β Qˆ β
in analogy with the case of the x-component of the magnetic moment, Equation (1.12b).
If we wish to make a similar calculation for another spin, the coefficients
cα and cβ will be different but the matrix elements of the operator Q̂ will be
the same. Averaging over all spins in the ensemble, we obtain:
∑ ∑c
m′ m
m′ Qˆ m =
∑ ∑ m ρˆ m′
∑ ∑ρ
m′ Qˆ m
m′ Qˆ m = Tr( ρˆ Qˆ )
The bar over the products of coefficients denotes ensemble average. Here,
we have introduced the symbol ρ̂ , which denotes the density operator, with
the matrix representation 〈m|ρˆ |m′〉 = ρm ,m′ = cm* ′ cm . Tr( ρˆ Qˆ ) represents taking
the trace (summing the diagonal elements in a matrix representation) of the
product of the two operators or the two matrices.
The preceding definition shows that the density operator is hermitian. If
the density operator for an ensemble is known, then the expectation value
of any operator corresponding to observable quantities can be computed.
We may wish to calculate the expectation value as a function of time. This
can be done by expressing the superposition states in terms of time-dependent
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
coefficients (and absorbing the phase factors exp(–iEt) into them). More
practically, we can calculate the time dependence of the density operator
directly, using the Liouville–von Neumann equation:
d ˆ
ρ(t) = −i Hˆ (t), ρˆ (t) = i ρˆ (t), Hˆ (t)
where the concept of a commutator of the two operators, introduced in
Equation (1.5), is used.
The Liouville–von Neumann equation can be derived in a straightforward
way from the time-dependent Schrödinger equation. Using time-dependent
density operator ρˆ (t) , the time-dependent form of the expectation value of
Equation (1.15) can be written as:
Qˆ (t) =
∑ ∑c
(t)cm (t) m′ Qˆ m =
∑ ∑ρ
(t) m′ Qˆ m
= Tr( ρˆ (t)Qˆ )
The matrix representation of the density operator is called the density matrix.
If we assume that the time dependence resides in the density matrix rather
than in the operator, the formulation of the time-dependent expectation values
in Equation (1.17) is denoted as the Schrödinger representation, in analogy
with the single-spin case of Equations (1.12). The elements of the density
matrix have a straightforward physical interpretation. The diagonal elements,
ρmm, represent the probabilities that a spin is in the eigenstate specified by the
quantum number m, or the relative population of that state. At the thermal
equilibrium, these populations are given by the Boltzmann distribution:
ρmm =
exp(− Em/kBT )
∑ exp(−E /k T)
T is the absolute temperature and kB is the Boltzmann constant, kB = 1.38066⋅10–23
JK–1. Nuclear spins are quantum objects and their distribution among various
quantum states should in principle be obtained using the Fermi–Dirac statistics
(for half-integer spins) or by Bose–Einstein statistics (for integer spins) rather
than from Equation (1.18). However, for anything but extremely low temperatures, the Boltzmann statistics is an excellent approximation.
The energy differences involved in NMR are tiny, which results in very
small population differences. For an ensemble of N spin-1/2 particles with
a positive magnetogyric ratio, we can write:
nαeq/N = ραα
1
1
1
exp 1 γ I B0/kBT ≈ 1 + 1 (γ I B0/kBT ) = 1 + bI
2
2
2
Equilibrium and Nonequilibrium States in NMR
nβeq/N = ρββ
1
1
1
1
exp − γ I B0/kBT ≈ 1 − (γ I B0/kBT ) = 1 − bI
2
2
2
2
where we expand the exponential in a power series, retain only the linear
term and introduce the quantity bI = γ I B0/kBT , called the Boltzmann factor.
N is the total number of spins.
Retaining only the linear term is valid as long as γ I B0 << kBT , a condition
easily fulfilled for anything but extremely low temperatures. The approximation of retaining only the linear term is called the high temperature approximation. For protons, which have the magnetogyric ratio γH = 26.7522⋅107
T–1s–1, at 300K and in the magnetic field of 9.4 T (corresponding to the 1H
frequency of 400 MHz), we obtain nαeq/N = 0.500016 and nβeq/N = 0.499984.
At 4K, the liquid helium temperature, the corresponding numbers are nαeq/
N = 0.5024 and nβeq/N = 0.4976.
Clearly, the natural Boltzmann polarization of the two nuclear spin levels is
very low — the origin of the poor sensitivity of NMR — compared to other
spectroscopic techniques. The population difference between the spin energy
levels determines the expectation value of the z-component of the ensembleaveraged nuclear magnetic moment, which we call the magnetization vector, of
an ensemble of spins. This can be seen (the reader is recommended to prove it
as an exercise) by inspecting Equation (1.15) and recognizing the fact that the
matrix representation of µ̂ z or Î z (see Equation 1.4a) has only diagonal elements.
The off-diagonal elements of the density matrix are called coherences. The
coefficients cm can be written as a product of an amplitude |cm| and a phase
factor exp(iam). The coherence between the eigenstates m and m′ is given by:
cm* ′ cm = cm* cm′ exp(i( am − am′ ))
The coherences are closely related to the magnetization components perpendicular to the magnetic field (the transverse magnetization). The phase
factor exp(i(am – am′)) for an individual spin specifies the direction of the
transverse magnetic moment. For a large number of spins at thermal equilibrium, there is no physical reason to assume any of the directions perpendicular to the field to be more probable than any other, which amounts to a
random distribution of the phase factors and vanishing coherences.
Simple NMR: the Magnetization Vector
According to the previous section, an ensemble of noninteracting nuclear
spins at the thermal equilibrium can be represented by a magnetization
vector, M, oriented along the direction of the external magnetic field. As we
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
shall see later, the spins do interact with each other, but the interactions are
usually very weak compared to the Zeeman energies. In addition, the interactions tend to average to zero because of molecular motions in isotropic
For all practical purposes, the spins in isotropic liquids can be considered
as noninteracting if their NMR spectra do not show spin–spin splittings
( J couplings). The magnitude of the magnetization vector at thermal equilibrium, M0, is proportional to the Boltzmann factor and depends, in addition, on
the magnetic moment of an individual spin and on the number of spins:
M0 =
Nγ I2 2 I ( I + 1)B0
3 k BT
The magnetization vector is a macroscopic quantity and its motion can be
described using classical physics. Classically, if a magnetic moment is not
aligned along the magnetic field, it will precess around the field direction, the
same motion that we found earlier quantum mechanically for the magnetic
moment of an individual spin and which we refer to as Larmor precession.
The concept of a magnetization vector is very useful for describing NMR
experiments in systems of noninteracting spins. We can use it, for example,
to describe the effect of radiofrequency pulses. To describe an NMR experiment, we need to consider the presence of a static magnetic field in the zdirection with the magnitude B0, as well as the time-dependent magnetic
field corresponding to the magnetic component of electromagnetic radiation
(the radiofrequency field), B1.
The role of the radiofrequency field is simplest to consider in a frame
rotating around the B0 field with the angular velocity corresponding to the
radiofrequency. The reader can find the derivation and discussion of the
rotating frame in several books, e.g., Slichter or Levitt (further reading). For
our purposes, it is sufficient to say that if a radiofrequency field is applied
near resonance (meaning that the radiofrequency is close to the Larmor
frequency), the nuclear spins move as if it were the only field present — i.e.,
they precess around it with the angular velocity γIB1.
In this rotating frame, the applied radiofrequency field appears to be static.
If we keep the radiofrequency field switched on for a time τ, the magnetization will rotate by the angle γIB1τ. Adjusting the magnitude of the radiofrequency field, and/or the time during which we apply it in such a way
that the angle becomes π/2, we obtain a 90° pulse or a π/2 pulse, which has
the effect of turning the magnetization, originally in the z-direction to the
transverse plane, according to Figure 1.1. Setting the condition γIB1τ = π
results in a 180° pulse or π pulse.
Based on classical equations of motion for the dynamics of the magnetization vector, the transverse magnetization in the rotating frame will,
after a 90° pulse on resonance, retain its direction and magnitude. If the
radiofrequency pulse is not exactly on resonance, i.e., if there is a frequency
offset, ωoff, between the applied radiofrequency and the Larmor frequency,
Equilibrium and Nonequilibrium States in NMR
Illustration of the effect of radiofrequency pulses, here a 90° pulse.
the transverse magnetization after a 90° pulse is expected to precess in the
rotating frame with the frequency corresponding to the offset. Thus, the
rotating frame serves to make the frequency offset the only precession
Due to chemical shifts, the offset will vary for chemically inequivalent
nuclear spins of the same species, e.g., for inequivalent protons. The motion
of the magnetization vector generates an oscillating signal in the detector of
the NMR spectrometer, the free induction decay or FID. The FID in this simple
picture is expected to last forever, which we know is not in agreement with
experimental facts. The origin of the decay of the FID is nuclear spin relaxation.
The description of even the very simplest NMR experiment, the detection
of an FID after a 90° pulse, requires that we take into consideration two types
of motion: the coherent motion or precession around the effective fields and
the incoherent motion or relaxation. A very simple model containing these
two elements is described by the Bloch equations, called so after one of the
inventors of NMR, who proposed them in a seminal paper from 1946.1 The
Bloch equations can be written in the following, slightly simplified, form:
dM z M0 − M z
= M yω off − x
dM y
= − Mxω off −
Bloch equations are phenomenological, i.e., they aim at a simple description of the observed NMR phenomenon, without requirements of a strict
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
derivation. We will show later that the relaxation behavior in the form of
the Bloch equations can be derived in some situations and more complicated
relaxation phenomena are predicted in other cases.
Equation (1.22a) describes the time variation of the longitudinal (along the
external field) component of the magnetization vector. The equation predicts
the magnetization component along B0 to relax exponentially to its equilibrium value, M0. The time constant for that process is called the spin-lattice
or longitudinal relaxation time and is denoted T1. The rate of exponential
recovery of Mz to equilibrium is given by the inverse of T1, sometimes
denoted R1, and called spin-lattice relaxation rate (the notation rate constant
rather than rate would be more logical but is rarely used). We will use T1
and R1 to describe longitudinal relaxation throughout this book.
The solution of the Bloch equation for Mz, after the initial inversion of the
magnetization (corresponding to the application of a 180° pulse at the time
t = 0) can be written as:
Mz (t) = M0 (1 − 2 exp(−t/T1 ))
The process of recovery of Mz is illustrated in Figure 1.2. The reader is
encouraged to prove that Equation (1.23) is a solution to Equation (1.22a).
Equation (1.22b) and Equation (1.22c) describe the motion of the transverse
components of the magnetization vector. The first part of the expressions
corresponds to the coherent motion of M in the rotating frame. The second
part introduces the concept of the transverse, or spin–spin relaxation time, T2,
describing the exponential decay of the xy-magnetization to its equilibrium
value of zero.
One interesting observation we can make in Equation (1.22b) and Equation
(1.22c) is related to the units. Clearly, the factors multiplied by the magnetization components in the two terms should have the same dimensions.
Because the natural unit for the angular frequency is radians per second, the
relaxation rate, or the inverse relaxation time, R2 = 1/T2, should indeed also
Illustration of the recovery of the Mz magnetization after the 180° pulse in the inversion-recovery
Mx,y (arbitrary unit)
Equilibrium and Nonequilibrium States in NMR
The decay of on-resonance transverse magnetization as a function of time.
be expressed in these units. Usually, relaxation times are given in seconds (the
rates are given in s–1), which tacitly implies that radians can be omitted; we
note in parenthesis that the radian is considered a dimensionless unit in physics. Assuming that the pulse is on-resonance, the evolution of the transverse
components of the magnetization vector after a 90° pulse is expressed as:
Mx , y = M0 exp(−t/T2 )
The decay of the transverse magnetization for the on-resonance situation
is illustrated in Figure 1.3. We recall that the NMR spectrum for a system of
noninteracting spins is the Fourier transform of the transverse magnetization
(see for example, Ernst et al. or Levitt, further reading). The Fourier transform
of an exponential decay is a Lorentzian centered at zero frequency, with the
full width at half-height (in Hertz) equal to ∆v = 1/πT2 (compare Figure 1.4).
The reason for introducing two different relaxation times is that the return
to the equilibrium is a physically different process for the longitudinal and
transverse magnetization components. The longitudinal relaxation changes the
energy of the spin system and thus involves the energy exchange between the
spins and the other degrees of freedom in the surrounding matter (the notion
of spin-lattice relaxation originates from early NMR work on solids, in which
Relationship between the Lorentzian lineshape and the transverse relaxation time constant, T2.
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
these other degrees of freedom were those of the crystal lattice). The transverse
relaxation involves, on the other hand, the loss of phase coherence in the motion
of individual spins.
Coupled (Interacting) Spins: the Product Operator
The physical description of a spin system using magnetization vectors is
useful for an ensemble of noninteracting spins. For the case of coupled spins,
more refined tools are necessary to describe NMR experiments and NMR
relaxation processes. These tools have their basis in the concept of the density
operator. In Section 1.2, we introduced the density operator by means of its
matrix representation, the density matrix. For ensembles of interacting spins,
working with the matrix representation of the density operator quickly
becomes quite difficult to handle; therefore, it is often useful to work directly
with the expansion of the density operator in another type of vector space,
called the Liouville space. The concept of the Liouville space is discussed in the
book by Ernst et al. (further reading), among others, and in a review by Jeener.2
A basis set in the Liouville space can be formulated in terms of the basis
set in the Hilbert space in the following way. Consider the case of our isolated
spin 1/2 nucleus, with its basis set of |α〉 and |β〉. As we discussed earlier,
the scalar products of a bra and a ket, such as 〈α | β〉, are numbers. We can
also define outer products of these vectors, in which we have a ket to the left
and a bra to the right. Four such constructs are possible:
α α , α β , β α , β β
These objects are operators, which can be illustrated by the following example:
α β β = α 1= α
The operator |α〉 〈β | acts on the ket |β 〉. 〈β | β 〉 is a number (unity) and we
thus obtain that the result of the operation is another ket, |α〉. The operators
created as outer products of the basis vectors in the Hilbert space can be used
to form an operator basis set in the Liouville space. If the dimensionality of
the Hilbert space is n (n = 2 in our example of an isolated spin 1/2, with |α〉
and |β 〉 as the basis vectors), then the dimensionality of the corresponding
Liouville space is n2 (which in our case is four, corresponding to the four basis
vectors |α〉 〈α |, | α〉 〈β |, | β 〉 〈α |, |β 〉 〈β |). The operators expressed as outer products
will be an exception from the rule of decorating the operators with “hats.”
It is easy to obtain the matrix representations of the four preceding operators in the Hilbert space. For example, the matrix representation of the |α〉
〈α operator is 10 00 . The reader is advised to confirm this and to derive the
matrices corresponding to the other three operators. Comparing the matrix
Equilibrium and Nonequilibrium States in NMR
representations of the operators of Equation (1.25) with the Pauli spin matrices introduced in Equations (1.4), we can note that:
α + β β = 21 1ˆ op
α − β β = Î z
β + β α = Î x
α − α β = Iˆy
where we have introduced the unit operator, 1̂ op, with the property that it
leaves whatever function or vector coming after it unchanged. The unit operator is represented by a unit matrix, a matrix with ones on the diagonal and
zeroes elsewhere. Equation (1.27a) through Equation (1.27d) imply an orthogonal transformation of one set of vectors in the Liouville space into another.
Once we have defined an appropriate Liouville space basis, we can expand
any other operator in that space. For example, the density operator at thermal
equilibrium for an ensemble of isolated spins can be expressed as:
ρˆ eq = 1ˆ op + bI Iˆz
where the Boltzmann factor, bI, was defined together with Equation (1.19b).
The idea of expanding the density operator into an operator basis set
related to the spin operators is easily generalized to more complicated spin
systems. For a system of two spins, I and S, with the components, Î x , Î y , Î z
and Ŝx , Ŝy , Ŝz , we can form an appropriate basis set by including the unit
operator for each spin. The product operator basis for a two-spin system will thus
consist of 4 × 4 operators, given by the product of one operator for the spin I
and one operator for spin S, with suitable normalization. This product operator
basis will consist of 21 1̂op , Î x , Î y , Î z , Ŝx , Ŝy , Ŝz , 2 IˆxSˆ x , 2 IˆxSˆ y , 2 IˆxSˆ z , 2 Iˆy Sˆ x ,
2 Iˆy Sˆ y , 2 Iˆy Sˆ z , 2 Iˆz Sˆ x , 2 Iˆz Sˆ y , 2 Iˆz Sˆ z , a total of 16 operators.
A two-spin system is characterized by the occurrence of four eigenstates
of the Zeeman Hamiltonian, |αα〉, |αβ 〉, |βα〉, |ββ 〉, implying the dimensionality of four for the Hilbert space and the dimensionality of 42 = 16 for the
Liouville space. Clearly, we retain the dimensionality of the Liouville space
moving between the basis set consisting of ket-bra products |αα〉 〈αα |, etc.
and the product operator basis. Contrary to the density matrix calculations, the
product operators provide a method to easily describe NMR experiments and
are invaluable for evaluating NMR pulse sequences on interacting spins. The
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
product operator formalism has in particular been used for improving and
designing new experiments. We will explore this further in the chapters
describing experimental techniques and applications.
In the same way in which one can define operators in the Hilbert space,
it is possible to construct their analogues in the Liouville space. These are
called superoperators. The superoperator analogue of the Hamiltonian is
called Liouville superoperator or Liouvillian, Lˆˆ (we will use the “double hat”
symbol for superoperators). It is defined as commutator with the Hamiltoˆ , ] . Operating with the Liouvillian on another operator amounts
nian, Lˆˆ = [ H
thus to taking the commutator of the Hamiltonian with that operator. This
superoperator formalism can be used to rewrite the Liouville–von Neumann
equation (Equation 1.16) in the form:
d ˆ
ρ = −i[ Hˆ , ρˆ ] = −iLˆ ρˆ
Equation (1.29) illustrates one of the reasons to use the superoperator
formalism: it allows very compact notation. Another important superoperator is the relaxation superoperator, which operates on the density operator
and describes its evolution towards equilibrium. In the same way in which
operators are represented in the Hilbert space by matrices, the superoperators have matrix representations in the Liouville space denoted as supermatrices. The concept of relaxation supermatrix is very important in
relaxation theory and we will come back to it in Chapter 4. Yet another
type of superoperators are various rotation superoperators introduced in
Chapter 7. In some situations, we need to consider more complicated
evolution of the density operator and the Liouville space formalism is then
very useful; we will see examples of that application of the formalism in
Chapter 10 and Chapter 15.
1. Bloch, F. Nuclear induction, Phys. Rev., 70, 460–474, 1946.
2. Jeener, J. Superoperators in magnetic resonance, Adv. Magn. Reson., 10, 1–51,
Simple Relaxation Theory
2.1 An Introductory Example: Spin-Lattice Relaxation...............................19
2.2 Elements of Statistics and Theory of Random Processes .....................22
2.3 Time-Dependent Perturbation Theory and Transition
2.5 Finite-Temperature Corrections to the Simple Model...........................37
In order to establish a foundation for relaxation theory, we will in this chapter
provide a background by introducing the concepts of spin-lattice and
spin–spin relaxation. We begin by looking at a simple example, which captures the basic principles of the nuclear spin relaxation without being directly
applicable to any real physical situation. We use this simple example to
introduce the important basic concepts within the theory of random processes
and then to calculate transition probabilities driven by random processes.
We look briefly at the prediction of the simple model and introduce the
important finite temperature corrections to the theory.
An Introductory Example: Spin-Lattice Relaxation
Among the two types of relaxation processes, the spin-lattice relaxation is
easier to explain in simple conceptual terms, and therefore we will start with
describing this process. Let us consider a system of I = 1/2 spins with a positive
magnetogyric ratio in a magnetic field B0. We deal thus with a two-level system
with populations nα and nβ , with the energy spacing ∆E = – ω0 = γ B0 and
with the Larmor frequency ω 0. At thermal equilibrium, the relative populations of the two levels are given by Equation (1.19a) and Equation (1.19b).
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
Let us assume that the spin system is not in equilibrium. The nonequilibrium
situation can be produced in different ways. One way is to change the magnetic
field quickly. The equilibrium population distribution at the original field does
not correspond to the equilibrium at the new field. An everyday example of
such an experiment is putting an NMR sample into the NMR magnet. A better
controlled variety of such an experiment is the field-cycling experiment, in
which the magnet current is switched rapidly and to which we will return in
Chapter 8. An important feature of an experiment of this kind is the fact that
the Hamiltonian — and thus the Zeeman splitting — changes immediately,
while the density operator — for example, populations — requires some time
to adjust. This is the simplest example of spin-lattice relaxation. Another way
of creating nonequilibrium states in NMR, mentioned in Section 1.3, is to use
radiofrequency pulses. For example, a 180° pulse (or a π -pulse) inverts the
populations in a two-level system.
We assume that the changes of the populations of the two levels follow
the simple kinetic scheme:
= (nβ − nβeq )Wβα − (nα − nαeq )Wαβ = WI (nβ − nβeq − nα + nαeq )
= WI (nα − nαeq − nβ + nβeq )
Equation (2.1a) and Equation (2.1b) imply that the α - and β - levels are
populated (and depopulated) by first-order kinetic processes, with the rates
(in the sense of chemical kinetics) proportional to the deviations of the
populations from the equilibrium values. The proportionality constants are
transition probabilities, assumed for the time being equal in both directions,
Wβα = Wαβ = WI. We will return to this point in Section 2.5.
Instead of discussing the changes of the populations nα and nβ , we can
introduce new variables, the difference in populations, n = nα − nβ , and the
sum of the populations, Ν = nα + nβ . In terms of these variables, Equation
(2.1a) and Equation (2.1b) can be rewritten as:
= −2WI (n − neq )
Equation (2.2a) tells us that the total number of spins is constant; Equation
(2.2b) says that the population difference returns to equilibrium in an exponential process. We note that the population difference is proportional to the
longitudinal component of the magnetization vector, n(t) ~ γ I 〈 Iˆz 〉(t) = Mz (t) .
By inspection of the first of the Bloch equations, Equation (1.21a), we can
relate the T1 to WI according to:
T1–1 = 2WI
Simple Relaxation Theory
The relaxation rate is thus proportional to a transition probability. The
transitions giving rise to NMR relaxation are nonradiative, i.e., they do not
arise through emission or absorption of radiation from radiofrequency fields.
Instead, they occur as a result of weak magnetic interactions, with the origin
in the sample, if those oscillate in time with frequency components at the
Larmor frequency. The weakness of the relevant interactions for spin 1/2
nuclei results in small transition probabilities and the spin-lattice relaxation
processes being slow, typically on the millisecond to second time scale. More
specifically, the time dependence of the interactions has its origin in random
molecular motions. The effect of molecular motions can be explained as
Many interactions in NMR are anisotropic, i.e., depend on the orientation
of the spin-carrying molecule in the magnetic field. A prominent example,
to be discussed in detail in the next chapter, is the dipole–dipole interaction.
Thus, the interaction changes when the molecule reorients. We concentrate
at this stage on the case of ordinary, isotropic liquids. As opposed to the
situation in the gas phase, molecules in a liquid are surrounded by neighbors
and are not able to rotate freely. Rather, the rotational motion of a moleculefixed axis in a molecule immersed in a liquid can be pictured as a sequence
of small angular steps (a random walk on the surface of a sphere). This is
illustrated in Figure 2.1.
The combination of the random walk and the anisotropic interactions gives
rise to Hamiltonians for nuclear spins varying randomly with time. The
effect of these random, or stochastic, interactions is to cause transitions, which
are intimately connected with the nuclear spin relaxation processes. Therefore,
Illustration of a random walk on the surface of a sphere.
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
we need to expand on this issue and explain how this happens in two steps.
Because the random nature of the interactions is essential, we introduce first
some basic ideas and concepts from the theory of random processes. Second,
we present a quantum mechanical treatment of transition probabilities
caused by random motions.
Elements of Statistics and Theory of Random Processes
The results of many physical experiments on molecular systems are best
described by using statistical methods. This is because many processes are
random in nature and we need a way to characterize expectation values and
averages. In this section, we provide an introduction to the theory of random
processes. A more complete description of the subject can be found, for
example, in the book of Van Kampen (see the further reading section in the
Stochastic Variables
Consider a quantity that can be measured and assigned a numerical value x.
Assume that the numerical values vary within a certain interval between different realizations of the measurement in an unpredictable way. We then call X
a stochastic variable. The values, x, that X adopts are called numerical realizations.
An example from daily life is the length of an individual person, in which X is
a symbolic notation for the length of a person and x is the value of the length.
This, of course, varies stochastically within a certain group of people.
An example relevant for NMR is the orientation of a molecule-fixed vector
with respect to the laboratory z-axis. The orientation can be specified in terms
of an angle θ, as indicated in Figure 2.2, which can be measured (at least in
principle) and thus assigned a numerical realization.
A stochastic variable can be described by a probability density, p(x):
p( x)dx = P( x ≤ X ≤ x + dx)
where P(x ≤ X ≤ x + dx) denotes the probability of the numerical realization
taking on a value within the indicated, infinitesimally small interval between
x and x + dx. Rather than working in terms of infinitesimal intervals, we can
use an integrated form:
∫ p(x)dx = P(x ≤ X ≤ x )
which describes the distribution of values x that the variable X adopts.
The probability density can be established by a long series of measurements. It is easy to imagine this being done for the length of an individual
Simple Relaxation Theory
Illustration of the orientation of a molecule-fixed vector with respect to the laboratory coordinate
frame. The orientation can be specified in terms of the angles θ and φ.
in a certain population, but perhaps not so for the second example described
previously. For that example — the orientation of a molecule-fixed vector in
isotropic liquid with respect to an external frame — one would rather make
an assumption that the distribution of the angles is isotropic, i.e., that all
angles are equally probable. The value of the probability density is then
determined by the normalization condition, i.e., by the requirement that the
integral of the probability density over all angles must yield unity. We will
return to this point in Chapter 6.
Stochastic variables often occur in pairs. For example, we can measure the
length and the weight of an individual within a certain population. We call
such two variables X and Y and denote the corresponding numerical realizations x and y. We introduce a probability density in two dimensions:
p( x ; y)dxdy = P( x ≤ X ≤ x + dx ; y ≤ Y ≤ y + dy)
where the symbol “;” means “and.” Alternatively, the corresponding integrated form can be used:
y2 x2
∫ ∫ p(x; y)dxdy = P(x
≤ X ≤ x2 ; y1 ≤ Y ≤ y2 )
y1 x1
If the two stochastic variables are statistically independent (uncorrelated),
p( x ; y) = p( x) p( y)
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
An important property of a stochastic variable is its average value (mean)
and this is given by:
〈X 〉 =
∫ xp(x)dx
We prefer to use the symbol 〈X 〉 rather than X for average; the bar is
used when the other notation can be confusing. Analogously, we can define
an average value, expectation value, of a function of X. For example, the
average value of Xn (the n-th power of X), called the n-th moment, is defined
mn = 〈X n 〉 =
∫ x p(x)dx
Another important average is the average of (X – c)n, where c is a constant.
This is called the n-th moment around c. Moments around mean values are
often used in statistics. They are denoted µn, and the second moment, µ2, is
particularly important:
µ2 = 〈(X − m1 )2 〉 = 〈X 2 〉 − 2 m1 〈X 〉 + m12 = 〈X 2 〉 − m12 = σ 2
and is called variance. The square root of the variance is called standard
deviation, σ.
Average values can also be defined for cases involving more than one
stochastic variable, e.g., for a product:
∞ ∞
m11 = 〈XY 〉 =
∫ ∫ xyp(x; y)dxdy
−∞ −∞
It can be useful to express p(x;y) as a product:
p( x ; y) = p( x) p( x|y)
where p(x | y) means the probability density for Y acquiring the value y,
provided that X assumes the value x. This is called conditional probability
density. If X and Y are statistically independent, then p(x|y) = p(y) and:
m11 = 〈XY 〉 = 〈X 〉〈Y 〉
Simple Relaxation Theory
We can also define a mixed second moment:
µ11 = 〈(X − 〈X 〉)(Y − 〈Y 〉)〉 = m11 − m10m01
which vanishes if X and Y are statistically independent. A convenient way
to express the correlation between two variables is to define a correlation
coefficient, ρ, between X and Y. This is defined as:
σ Xσ Y
where σX and σY are the standard deviations for the two stochastic variables.
For statistically independent X and Y, ρ = 0. For the opposite limiting case,
X = Y, it is easily seen that µ 11 = µ 2 = σ 2, and thus we have ρ = 1.
Stochastic Functions of Time
In NMR relaxation theory, stochastic processes (stochastic functions of time)
are important. As was mentioned in the preceding section, the combination
of random walk and anisotropic interactions leads to stochastic interactions.
Stochastic processes, Y(t), are those that give rise to a time-dependent stochastic variable, which means a quantity that at every point, t, in time
behaves as a stochastic variable.
The stochastic process is characterized by a probability density, p(y,t), in
general also dependent on time. An example of such a process might be the
depth of water at a certain point along an ocean beach, measured on a windy
day. Because of the waves, the water level changes with time, within certain
limits, and in a random way. The NMR-relevant example mentioned before —
the angle between a molecule-fixed vector and the external magnetic field
— is a typical stochastic function of time because of random motions (compare
Figure 2.1). The average value of Y(t) at time t is defined:
〈Y (t)〉 =
∫ yp(y, t)dy
The properties of a stochastic function, Y(t), at different times t are in
general not independent. We will investigate the correlation between Y(t) at
times t1 and t2. It is rather easy to imagine that such a correlation exists if
the time points t1 and t2 are close to each other on the timescale defined by
the random oscillation of the process Y(t). This is shown in Figure 2.3a and
Figure 2.3b, illustrating a rapidly decaying and a more persistent correlation.
We introduce p(y1,t1;y2,t2), the probability density for Y(t) acquiring value y1
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
Illustration of a stochastic function of time and the corresponding correlation function for a
rapidly-vanishing correlation, or short correlation time (a), and a more persistent correlation,
long correlation time (b).
Simple Relaxation Theory
at t1 and y2 at t2. Also in this case, we can use the concept of conditional
p( y1 , t1 ; y2 , t2 ) = p( y1 , t1 )p( y1 , t1|y2 , t2 )
A stochastic process is called stationary if the probability density p(y,t) does
not vary with time. If this is the case, the conditional probability density
simplifies to:
p( y1 , t1|y2 , t2 ) = p( y1 , 0|y2 , t2 − t1 ) = p( y1|y2 , τ )
where we have introduced the length τ = t2 − t1 of the time interval between
t1 and t2.
For the average value of a product of a stationary Y(t) at different times,
t1 and t2, we have, from Equation (2.12):
∫ ∫ y (t )y (t )p(y , t )p(y , t |y , t )dy dy
∫ ∫ y y p(y , 0)p(y , 0|y , t − t )dy dy
〈Y (t1 )Y (t2 )〉 =
1 2
The expression for the average value depends only on the time difference
τ = t2 − t1 and we can define:
〈Y (t1 )Y (t2 )〉 = G(t2 − t1 ) = G(τ )
The quantity G(τ) is called time-correlation function (tcf). Because it correlates a stochastic process with itself at different points in time, it is called
the autocorrelation function. The autocorrelation functions for the random
processes in Figure 2.3a and Figure 2.3b are also shown there. Cross-correlation
functions can also be defined and we will return to such functions later.
In relaxation theory, we often deal with complex functions Y(t). For such
cases, the definition of the autocorrelation function must be modified
G(τ ) = 〈Y (t)Y ∗ (t + τ )〉
For G(τ) defined in this way, we have:
G(τ ) = G∗ (τ ) = G(−τ )
which means that the time-correlation function is real and an even function
of time. For τ = 0, we obtain then:
G(0) = 〈Y (t)Y * (t)〉 = 〈|Y (t)|2 〉 = σ 2
That is, the autocorrelation function of Y at zero time is equal to the variance
of Y.
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
Let us further assume that 〈Y(t)〉 = 0. This assumption does not really cause
any loss of generality for a stationary process because we can always subtract
the time-independent average from our stochastic function of time. For the
limit of very long time, τ → ∞, it is reasonable to assume that Y(t) and Y(t + τ)
become uncorrelated and we can write:
lim G(τ ) = Y (t)
τ →∞
provided that 〈Y(t)〉 = 0. Thus, we expect a general time-correlation function
to be a decaying function of time, with an initial value given by the variance
of Y. A reasonable choice might be the function:
G(τ ) = G(0)exp(−||
τ /τ c )
The symbol τc is called correlation time. In Chapter 6, we will use a simple
model for the random walk, depicted in Figure 2.1, to demonstrate that our
choice of time-correlation function of the form given in Equation (2.26) can
indeed be obtained for the spherical harmonics of the angles specifying
molecular orientation in a liquid.
The correlation time has many possible simple interpretations: it is a measure of the time scale of oscillations of the random process or a measure of
the persistence of the correlation between values of Y(t) at different points
in time. The two correlation functions in Figure 2.3 can thus be identified as
being characterized by different correlation times. For the case of molecular
reorientation in a liquid, we can treat τc as an average time for a molecular
axis to change its direction by one radian.
The quantities of prime interest in relaxation theory are spectral density
functions, which are Fourier transforms of the tcf’s:
J (ω ) =
∫ G(τ )exp(−iωτ )dτ
Because the concept of negative time is awkward, it may be more convenient to define the spectral densities as twice the one-sided Fourier transform
of the time-correlation function:
J (ω ) = 2 G(τ )exp(−iωτ )dτ
According to the Wiener–Khinchin theorem in the theory of stochastic processes, the spectral density has a straightforward physical interpretation: it
Simple Relaxation Theory
−5 × 109 −3 × 109 −1 × 109 1 × 109
ω(rad s−1)
3 × 109
5 × 109
Spectral density function calculated with a correlation time τc = 2 ns.
is a measure of the distribution of the fluctuations of Y(t) among different
frequencies. The spectral density associated with the exponentially decaying
tcf is Lorentzian:
J (ω ) = G(0)
2τ c
1 + ω 2τ c2
The shape of the Lorentzian spectral density is indicated in Figure 2.4.
Time-Dependent Perturbation Theory and Transition
Probabilities in NMR Relaxation Theory
Very few problems in quantum mechanics can be solved exactly; therefore,
techniques of approximation are needed. If we can somehow relate the
problem at hand to a system with known solution, we can use this to get
further. If the most important part of the system can be characterized by
analogy with a known system, described by the Hamiltonian Ĥ0 and with
known solutions, we can focus on the small addition to the main part.
In general, perturbation theory is based on the assumption that the Hamiltonian for the system under consideration can be expressed as a sum of the
main, unperturbed part, Ĥ0 (usually independent of time), and a smaller term
(or terms), called perturbation. The eigenstates of Ĥ0 are assumed to be known
and the theory is applied to find how these are modified by the presence of
the perturbation. The perturbation may be constant in time or time dependent.
The approximation technique is referred to as perturbation theory. Quantum
mechanical calculations of transition probabilities are based on the timedependent perturbation theory, in which the perturbation is assumed to be time
dependent. The somewhat sketchy derivation here is based on the book of
Carrington and MacLachlan.1 For a more complete and formal presentation,
the reader is referred to the book of Atkins and Friedman (see further
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
Consider a simple case of a two-level quantum system (energy levels Ea
and Eb, eigenfunctions ψa and ψb), which is acted on by a Hamiltonian
containing a sum of the main part, Ĥ0 , and time-dependent perturbation Vˆ (t) . We look for approximate solutions to the time-dependent
Schrödinger equation, formulated in Equation (1.9), in the form:
ψ (t) = ca (t)ψ ae − iEat + cb (t)ψ b e − iEbt
Equation (2.30) is formally very similar to Equation (1.11a) and Equation
(1.11b). The physical meaning of Equation (2.30) is that the perturbation does
not change the nature of the problem in any fundamental way, i.e., the
eigenstates of Ĥ0 still provide a useful set of functions in which the approxˆ =H
ˆ + Vˆ (t) can be expanded. In NMR, this is never
imate solutions to H
really a problem: the eigenstates to the Zeeman Hamiltonian form a complete
set of functions in the spin space, i.e., any arbitrary spin function for I = 1/2
can be expressed exactly in the form of Equation (2.30).
Assume vanishing diagonal elements of Vˆ (t) , i.e., the Hamiltonian represented by the matrix:
Ea
Vab (t)
Eb
Vba (t)
with the matrix elements Vab (t) = 〈 a|Vˆ (t)|b 〉 = Vba* (t) (the perturbation is a hermitian operator). It is easy to show that the coefficients fulfill the equations:
dca (t)
= Vab (t)e i ( Ea − Eb )t cb (t) = Vab (t)e iω abt cb (t)
dcb (t)
= Vba (t)e i ( Eb − Ea )t ca (t) = Vba (t)e − iω abt ca (t)
where ωab = (Ea – Eb) and all energy-related quantities are in angular
frequency units.
Note that the equations for the coefficients are coupled. Suppose that the
system is in the state a at t = 0 (ca(0) = 1, cb(0) = 0) and that the perturbation
is weak, i.e., the coefficients change slowly. We can then obtain an approximate expression for the coefficient cb(t) as:
cb (t) = −i Vba (t′)e − iω abt′ dt′
c (t) = i Vba* (t′)e iω abt′ dt′
Simple Relaxation Theory
The probability that the system is in state b is expressed by |cb(t)|2 = cb(t)
cb*(t). The transition probability per unit time is the rate of change of this
Wab =
dc (t)
dc * (t)
cb (t) = b cb* (t) + cb (t) b
The expression for dcb(t)/dt in Equation (2.32b) can be rearranged slightly
dcb (t)
= −ie − iω abtVba (t)
We can use this result together with Equation (2.34) to obtain:
Wab = e
− iω abt
Vba (t) Vba* (t ′)e iω abt′ dt ′ + c.c.
∫ V (t)V (t′)e
iω ab ( t′− t )
dt ′ + c.c.
where c.c. denotes complex conjugate.
When discussing relaxation-related transitions in a liquid, we need to
know an average behavior of a large number of systems with stochastic Vˆ (t) ,
i.e., with the perturbation varying from one member in the spin ensemble
to another in a random way. To describe the average behavior in an ensemble
of spins, we make the variable substitution τ = t′ − t and take at the same
time the ensemble average of Equation (2.36):
Wab =
∫ V (t)V (t + τ ) e
iω abτ
dτ + c.c.
We can note, in passing, that according to the ergodic hypothesis of statistical
mechanics, the ensemble average is equivalent to the time average taken for
a single system over a long time. We note that the integral in Equation (2.37)
contains the tcf of Vba(t), 〈Vba(t)V *ba(t + τ)〉. Making use of the properties of
tcf’s we can, following Equation (2.23), write G*ba(τ) = 〈Vba(t)V *ba(t + τ)〉 and
we obtain:
Wab = Gba (τ )e iω abτ dτ
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
Let us now assume that we wish to study the transition probabilities on
a time scale much larger than the correlation time characterizing the decay
of the time-correlation function, t >> τc. Under this condition, G(τ) vanishes
at ±t and the integration limits can be extended to ±∞. Using Equation (2.23)
once more, we obtain for the transition probability:
Wab =
Gba (τ )e iω abτ dτ =
∫ G (τ )e
− iω abτ
= 2 Gba (τ )e − iω abτ dτ = Jba (ω ab )
Thus, we obtain the very important result that the transition probability,
induced by a randomly fluctuating interaction, is equal to the spectral density of the random perturbation, evaluated at the frequency corresponding
to the relevant energy level spacing. We introduce here a notation convention
to be followed throughout this book. We use the symbol J(ω) for a spectral
density of a purely classical random function; the script symbol with appropriate indices, here Jba(ω), denotes a spectral density for matrix elements of
a stochastic operator.
Through the Wiener–Khinchin theorem, the transition probability is therefore connected to the power available at the transition frequency. This tells
us that the relaxation processes are in a sense related to the Einstein formulation of transition probabilities for stimulated absorption and emission of
radiation; those are also proportional to the power available at the transition
frequency. In passing, we can note that the spontaneous emission processes,
which in principle could contribute to depopulating the upper spin state,
can be completely neglected in NMR because of the low frequencies (energy
differences) involved. For a more complete discussion of Einstein transition
probabilities, consult the book by Atkins and Friedman (further reading).
Predictions of the Simple Model
We are now in a position to examine what the results mean for the relaxation
rates. Combining Equation (2.3) for the spin-lattice relaxation rate with Equation (2.39) for the transition probability and Equation (2.29) for the spectral
density, we arrive at a simple expression for T1:
T1−1 = 2G(0)
2τ c
1 + ω 20 τ c2
Simple Relaxation Theory
Here, the frequency corresponding to the difference in energy levels
according to the previous section is denoted by ω0 and is known as the
Larmor, or resonance, frequency introduced in Chapter 1. According to Equation (2.24), G(0) is the variance or the mean-square amplitude of the interaction leading to relaxation. The average value of the interaction is assumed
to be zero.
Let us assume a simple (and physically not quite realistic) relaxation mechanism, which we can call the randomly reorienting field. We thus assume that
the spins are subject to a local magnetic field (a vector) with a constant
magnitude b, whose direction with respect to the large external field B0 (or
to the laboratory z-direction) fluctuates randomly in small angular steps in
agreement with Figure 2.1. This model is a simplified version of the effects
of dipolar local fields, discussed in detail in Chapter 3. The model is also
related to the fluctuating random field description as used in books by Slichter,
Canet, and Levitt (see further reading). The mean-square amplitude of the
fluctuating Zeeman interaction is then G(0) = γ 2Ib2 and we obtain:
T1−1 = 4γ I2b 2
1 + ω 02τ c2
Magnetic field has the units of tesla (T) and γ I has the units of rad s–1 T–1.
Thus, the square of the interaction strength expression in front of the Lorentzian has the units rad2 s–2. As mentioned in Section 1.3, the radians are tacitly
omitted, which leads to expression of the correlation time and the relaxation
time in seconds. In full analogy with this case, in all other expressions in
relaxation theory the interaction strength is expressed in radians per second,
the correlation time in seconds, and the relaxation rate in s–1.
The meaning of the correlation time in Equation (2.41) requires some
reflection. The Zeeman interaction between the z-component of the nuclear
magnetic moment and the local field can be written:
Vˆ = −γ I Iˆzbz = −γ I Iˆzb cos θ
where we have used the fact that if the local field vector is oriented at angle
θ with respect to the laboratory z-axis, then its z-component can be expressed
as bcosθ.
The angle θ and thus its cosine are, according to the model, a random
function. The correlation time is related to the rate of decay of the correlation
between cosθ(t) and cosθ(t + τ). For future reference, we note that, except
for a normalization constant, cosθ is identical to the rank-1 spherical harmonics
function, Y1,0. More about spherical harmonics can be found in the books by
Atkins and Friedman and Brink and Satchler (see further reading).
We will come to correlation functions for spherical harmonics in Chapter 6.
It will be demonstrated there that the correlation function of Y1,0 is indeed
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
1/T1 (s−1)
ω0(rad s−1)
1/T1 calculated as a function of Larmor frequency for two correlation times, τc =0.2 ns (lower
curve) and τc = 2 ns, using Equation (2.41). The value γI2b2 = 2.15 × 109 s–2 was used.
an exponential decay as in Equation (2.26). We will call the correlation time
in Equation (2.41) rotational correlation time (to be strict, we should really add
“for a rank-1 spherical harmonics”) related to the reorientation of the molecule carrying the spins. Indeed, using the results to be obtained in Section
6.1, we should insert a factor 1/4π into the right-hand-side of Equation (2.41).
This does not matter too much because we are only interested here in following the variation of the T1–1 with certain physical quantities and not in
its absolute magnitude.
We can use the simple form of Equation (2.41) to predict the dependence
of the spin-lattice relaxation rate on the strength of the external magnetic
field, B0 (through the Larmor frequency), and on the molecular size and
temperature through the correlation time. Let us begin by looking at the
dependence of T1–1 on the magnetic field at a constant value of the correlation
time. A plot of the relaxation rate vs. ω0 is shown in Figure 2.5 for two
physically reasonable values of τc: 200 ps and 2 ns. According to the plot,
the relaxation rate is expected to be constant at low field and decrease
dramatically in the vicinity of the condition ω0τc = 1. The rapid reduction of
the relaxation rate is sometimes called dispersion.
The curves in Figure 2.5 map exactly the plot of the spectral density vs.
frequency, as shown in Figure 2.4 (note the logarithmic scale on the horizontal axis in Figure 2.5), which of course reflects the fact that the two quantities
are proportional to each other for the simple case at hand. The flat region of
the plot of T1–1 against Larmor frequency, i.e., the region where the relaxation
rate is independent of the magnetic field, is called the extreme narrowing
region. In quantitative terms, the extreme narrowing range corresponds to
the condition ω02τc2 << 1. The extreme narrowing regime extends up to a
certain value of the Larmor frequency (magnetic field); the range is smaller
for a longer correlation time.
One more observation we can make from Figure 2.5 is that, for a given
correlation time, the nuclei with lower magnetogyric ratio will come out of
extreme narrowing regime at a higher field. Thus, protons with their high
magnetogyric ratio come out of extreme narrowing regime at a lower field
than 13C or 15N (compare Table 1.1).
Simple Relaxation Theory
Besides the Larmor frequency or the magnetic field, the second interesting
variable in Equation (2.41) is the rotational correlation time. We will arrive
at a stringent definition of this quantity later, but we wish to state its dependence on molecular size, solution viscosity, and temperature here. Using
hydrodynamic arguments for a spherical particle with the hydrodynamic
radius a and volume V = 4πa3/3, reorienting in a viscous medium, one can
derive the Stokes–Einstein–Debye (SED) relation, introduced in the NMR
context in the classical paper by Bloembergen, Purcell, and Pound (BPP)2:
τ c (l = 1) =
4πηa3 3Vη
k BT
k BT
Here, η is the solution viscosity (in the units kg s–1 m–1) and V is the volume
of the molecule. Clearly, the volume or the a3 factor in the numerator indicates that the rotational correlation time is expected to increase with the
molecular size. Indeed, “rule of thumb” relations for aqueous protein solutions relate the correlation time to the molecular weight. We should notice
that Equation (2.43a) applies for the correlation time for l = 1 spherical
harmonics. Several of the physically more realistic relaxation mechanisms
(e.g., the dipole–dipole relaxation) depend on the rotational correlation time
for l = 2 spherical harmonics. In that case, Equation (2.43a) is modified to:
τ c (l = 2) =
4πηa3 Vη
k BT
3 k BT
The correlation time depends on temperature in two ways. First, the viscosity is strongly temperature dependent, which is commonly described by
an Arrhenius-type expression (proposed for the first time almost 100 years
ago by deGuzman3):
η = η0 exp(Eaη/kBT )
where Eaη is the activation energy for viscous flow and η0 is a constant
without too much physical significance.
Second, the 1/T dependence originates from the presence of temperature
in the denominator of Equation (2.43a) and Equation (2.43b). The exponential
factor overruns the 1/T dependence, and it is common to express the temperature dependence of the correlation time by an analogous Arrhenius-type
τ c = τ 0 exp(Eaτ/kBT )
where the activation energy Eaτ is related to the barrier hindering the reorientation process and not necessarily the same as Eaη. The symbol τ0 is a
constant, again without too much physical significance.
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
T1 (s)
τc (s)
Plot of T1 vs. correlation time, τc. The plot was calculated using Equation (2.41), assuming γI2b2 =
1.13 × 109 for a proton at a magnetic field strength of 9.4 T (corresponding to a 1H resonance
frequency of 400 MHz).
A plot of T1 vs. correlation time for a given magnetic field B0 or Larmor
frequency ω0 is shown in Figure 2.6. When molecular motions are rapid so
that τc2ω02 << 1, the extreme narrowing region prevails, the frequency dependence in the denominators of Equation (2.41) and Equation (2.29) vanishes,
and we get the result that T1–1 is proportional to the correlation time (the lefthand side of the diagram in Figure 2.6). Turning to longer correlation times,
we can see in the figure that the relaxation is most efficient (the T1–1 is largest
or the T1 shortest) when τc = 1/ω0. At a typical NMR field of 9.4 T (400 MHz
proton resonance frequency or | ω0 | = 2π ⋅ 400 ⋅ 106 rad s–1), this happens for
protons at τc = 400 ps, which is a typical rotational correlation time for a
medium size organic molecule in an aqueous solution at room temperature.
When we study even larger molecules or solutions of high viscosity in
which the motions are more sluggish, the opposite condition, τc2ω02 >> 1,
applies and the relaxation rate becomes inversely proportional to the correlation time (the right-hand side of the diagram in Figure 2.6). According to
what we stated earlier concerning the dependence of the correlation time on
molecular size and temperature, the horizontal axis in Figure 2.6 can be
thought of as corresponding to the molecular weight. Similarly, we can think
of temperature increasing to the left in the diagram.
The case of the spin–spin relaxation time (transverse relaxation time) is a
little more complicated. It turns out that the corresponding rate for the simple
example at hand is proportional to the sum of spectral densities at frequencies zero and ω0:
τc
T2−1 = 2γ I2b 2
+ τc
2 2
1 + ω 0τ c
Simple Relaxation Theory
Relaxation rate (s−1)
10−12 10−11 10−10 10−9
Plot of 1/T1 and 1/T2 relaxation rates vs. correlation time, τc. The plot was calculated using
Equation (2.41) and Equation (2.46), assuming γI2b2 = 1.13 × 109 for a proton at a magnetic field
strength of 9.4 T (corresponding to a 1H resonance frequency of 400 MHz).
The relation between T1–1 = R1 and T2–1 = R2 as a function of the correlation
time is summarized in Figure 2.7. We can see that the two relaxation rates
are equal when the extreme narrowing conditions hold and that R2, as
opposed to R1, continues to increase (the NMR signal becomes broader;
compare to Figure 1.4) with the increasing correlation time.
Already at this stage, we can see that very important results are obtained
from the relaxation rate parameters. To summarize, we have seen that relaxation can provide detailed information about molecular motion and size
through the field dependence of the relaxation providing the rotational correlation time. This will be discussed in detail in Chapter 11 and Chapter 12,
in which applications of relaxation measurements are discussed.
Finite-Temperature Corrections to the Simple Model
Before leaving our simple model, we wish to discuss one principally important complication that occurs also in physically more realistic situations. The
issue is the assumed equality of the transition probabilities, Wαβ = Wβα = WI,
which was introduced at the beginning of this chapter. If this indeed were
true, i.e., if the transition probabilities up and down in the two-level system
were equal, then the spin system would evolve towards a situation with
equal populations of the two levels. According to the Boltzmann distribution, this would correspond to infinite temperature. If we want our spin
system to evolve towards thermal equilibrium at a finite temperature, we
need to introduce small correction terms to our transition probabilities. For
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
a spin with positive magnetogyric ratio, implying that the α state is lower
in energy than the β state, we need to write:
Wα β = WI (1 − bI )
Wβα = WI (1 + bI )
The symbol WI is an average transition probability, given in our two-level
model by Equation (2.39), and bI is the Boltzmann factor introduced in
Equation (1.19a) and Equation (1.19b). We remind the reader that the Boltzmann factors in NMR are so tiny that the practical implications of the
correction terms are small. However, when we use the assumption of Equations (2.47), the flow of populations in both directions becomes equal:
nαeqWαβ = nβeqWβα
as indeed it should be at the thermal equilibrium.
In the way in which Equation (2.47a) and Equation (2.47b) are formulated,
the thermal correction terms can be considered as inserted ad hoc, without
any more profound motivation than that they lead to the correct results of
Equation (2.48). It is possible to derive Equation (2.47a) and Equation (2.47b)
starting from the fundamental physical principles, if one assumes that the
environment of spins (the lattice) is also of quantum mechanical nature
and recognizes that simultaneous transitions in the spin system and the
lattice should be considered. The details of that problem are, however,
beyond the scope of this book and we recommend consulting the book of
Abragam (see further reading).
1. Carrington, A. and McLachlan, A.D. Introduction to Magnetic Resonance (New
York: Harper and Row), 1967.
2. Bloembergen, N., Purcell, E.M., and Pound, R.V. Relaxation effects in nuclear
magnetic resonance absorption, Phys. Rev., 73, 679–712, 1948.
3. de Guzman, J. Relation between fluidity and heat of fusion, Anales soc. espan.
fis. quim, 11, 353–362, 1913.
Relaxation through Dipolar Interactions
3.2 The Solomon Relaxation Theory for a Two-Spin System .....................45
Until now, we have only considered relaxation through a simple approach,
using a model relaxation mechanism. In this chapter we will deal with the
dipole–dipole (DD) relaxation, which has proven to be one of the most
important sources for obtaining molecular dynamics information. The discussion on this relaxation mechanism is based on the Solomon equations,
which, in turn, are based on rates of transitions between spin states.
For many spin 1/2 nuclei, the dipole–dipole interaction between nuclear
magnetic moments of spins in spatial vicinity of each other is the most
important relaxation mechanism. This was recognized in the seminal work
by Bloembergen, Purcell, and Pound (BPP) from 1948 (mentioned in the
previous chapter), which created the ground for all subsequent development of relaxation theory. What was not correct in the BPP paper was the
assumption that one of the spins acts as a source of a kind of random field
for the other spin, on which the interest was concentrated. As shown by
Solomon in 1955,1 the dipolar relaxation must be dealt with in terms of a
four-level system with two mutually interacting spins. In this chapter, we
go through the Solomon theory for the dipolar spin-lattice relaxation and
its consequences. Good description of dipolar interaction and dipolar relaxation can be found in the books by Abragam and by Hennel and Klinowski
(see further reading in the Preface).
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
The Nature of the Dipolar Interaction
Consider a situation in which we have two nuclear magnetic moments or
magnetic dipoles, µ1 and µ2, close to each other in space. Each of the magnetic
dipoles generates around itself a local magnetic field. The local magnetic
field is a vector field, i.e., it is at every point characterized by a magnitude
and a direction (compare Figure 3.1).
The form of the vector field generated by a magnetic dipole is rather
complicated. Let us consider the field created by the dipole µ2; as discussed
by Atkins and Friedman (see further reading), the magnetic field at the point
r (with respect to µ2 at the origin) can be expressed by:
Bloc ( µ2 ) = −
µ0
µ − 3 2 ⋅ µ2
3 2
4π r
µ0 is the permeability of vacuum
µ0/4π is in the SI units equal to 10–7 Js2C–2 m–1 or T2J–1m3
r (a scalar quantity) is the distance from the origin
r is a vector with Cartesian components x, y, z
rr is a tensor, an outer product of two vectors
A tensor has the property that when it acts on a vector, the result is another
vector with, in general, different magnitude and orientation. A Cartesian
The local magnetic field from a magnetic point dipole along the positive z-axis at origin.
Relaxation through Dipolar Interactions
tensor can be represented by a 3 × 3 matrix. An outer product of two vectors,
a and b, with Cartesian components ax, ay, az and bx, by, bz, respectively, can
be represented by a matrix with elements:
ax
ab = ay bx
az
axbx
bz = aybx
azbx
ay b y
axbz
aybz
azbz
Thus, the rr tensor can be expressed as:
xx xy xz x 2 xy xz
rr = yx yy yz = yx y 2 yz
zx zy zz zx zy z 2
The operation of a tensor on a vector can then be represented as a multiplication of that matrix with a vector. The second term in the expression in
Equation (3.1) is thus a vector whose magnitude and direction depend on
the point in which we are interested (as an exercise, the reader is advised to
derive the z component of the local field at r). The local magnetic field created
by the dipole µ2 interacts with µ1 and, in the same way, the local field of µ1
interacts with µ2. The classical dipole–dipole interaction energy, EDD, is:
4π r 3
1 ⋅ 2 − 31 ⋅ 2 ⋅ 2
where r = xi + yj + zk (i, j, and k are the unit vector along the three Cartesian
axes) now denotes the vector connecting the two dipoles.
The quantum-mechanical counterpart of the classical dipole–dipole energy
expression is the dipole–dipole Hamiltonian, which we obtain replacing the
magnetic dipoles by γ I Î and γ SŜ , corresponding to the two spins denoted
as I and S:
ˆ = − µ0γ Iγ S 3Iˆ ⋅ rr ⋅ Sˆ − Iˆ ⋅ Sˆ = b Iˆ ⋅ D ⋅ Sˆ
3
4π r
If the distance between spins (the length of the r vector) is constant and
is equal to rIS, then the quantity bIS, given by:
bIS = −
µ0γ Iγ S
4π rIS3
Nuclear Spin Relaxation in Liquids: Theory, Experiments, and Applications
is also a constant, denoted as the dipole–dipole coupling constant. Note that
the Hamiltonian in Equation (3.4) is in the same units as the dipole–dipole
coupling constant, i.e., in the angular frequency units (rad s–1). In analogy
with Equation (3.2a) and Equation (3.2b), the dipolar tensor D can be represented by a 3 × 3 matrix:
3x 2
3 xy
3 xz
2 −1
rIS2
rIS
3 xy
3 yz
rIS2
rIS
3 xz
3 yz
− 1
rIS
3 sin 2 θ cos 2 φ − 1
= 3 sin 2 θ cos φ sin φ |
c76fffb52a08a9cf | @inproceedings{2746, abstract = {We consider random Schrödinger equations on Rd or Zd for d ≥ 3 with uncorrelated, identically distributed random potential. Denote by λ the coupling constant and ψt the solution with initial data ψ0.}, author = {László Erdös and Salmhofer, Manfred and Yau, Horng-Tzer}, pages = {233 -- 257}, publisher = {World Scientific Publishing}, title = {{Towards the quantum Brownian motion}}, doi = {10.1007/3-540-34273-7_18}, volume = {690}, year = {2006}, } @article{2747, abstract = {Consider a system of N bosons on the three-dimensional unit torus interacting via a pair potential N 2V(N(x i - x j)) where x = (x i, . . ., x N) denotes the positions of the particles. Suppose that the initial data ψ N,0 satisfies the condition 〈ψ N,0, H 2 Nψ N,0) ≤ C N 2 where H N is the Hamiltonian of the Bose system. This condition is satisfied if ψ N,0 = W Nφ N,t where W N is an approximate ground state to H N and φ N,0 is regular. Let ψ N,t denote the solution to the Schrödinger equation with Hamiltonian H N. Gross and Pitaevskii proposed to model the dynamics of such a system by a nonlinear Schrödinger equation, the Gross-Pitaevskii (GP) equation. The GP hierarchy is an infinite BBGKY hierarchy of equations so that if u t solves the GP equation, then the family of k-particle density matrices ⊗ k |u t?〉 〈 t | solves the GP hierarchy. We prove that as N → ∞ the limit points of the k-particle density matrices of ψ N,t are solutions of the GP hierarchy. Our analysis requires that the N-boson dynamics be described by a modified Hamiltonian that cuts off the pair interactions whenever at least three particles come into a region with diameter much smaller than the typical interparticle distance. Our proof can be extended to a modified Hamiltonian that only forbids at least n particles from coming close together for any fixed n.}, author = {László Erdös and Schlein, Benjamin and Yau, Horng-Tzer}, journal = {Communications on Pure and Applied Mathematics}, number = {12}, pages = {1659 -- 1741}, publisher = {Wiley-Blackwell}, title = {{Derivation of the Gross-Pitaevskii hierarchy for the dynamics of Bose-Einstein condensate}}, doi = {10.1002/cpa.20123}, volume = {59}, year = {2006}, } @article{2791, abstract = {Generally, the motion of fluids is smooth and laminar at low speeds but becomes highly disordered and turbulent as the velocity increases. The transition from laminar to turbulent flow can involve a sequence of instabilities in which the system realizes progressively more complicated states, or it can occur suddenly. Once the transition has taken place, it is generally assumed that, under steady conditions, the turbulent state will persist indefinitely. The flow of a fluid down a straight pipe provides a ubiquitous example of a shear flow undergoing a sudden transition from laminar to turbulent motion. Extensive calculations and experimental studies have shown that, at relatively low flow rates, turbulence in pipes is transient, and is characterized by an exponential distribution of lifetimes. They also suggest that for Reynolds numbers exceeding a critical value the lifetime diverges (that is, becomes infinitely large), marking a change from transient to persistent turbulence. Here we present experimental data and numerical calculations covering more than two decades of lifetimes, showing that the lifetime does not in fact diverge but rather increases exponentially with the Reynolds number. This implies that turbulence in pipes is only a transient event (contrary to the commonly accepted view), and that the turbulent and laminar states remain dynamically connected, suggesting avenues for turbulence control.}, author = {Björn Hof and Westerweel, Jerry and Schneider, Tobias M and Eckhardt, Bruno}, journal = {Nature}, number = {7107}, pages = {59 -- 62}, publisher = {Nature Publishing Group}, title = {{Finite lifetime of turbulence in shear flows}}, doi = {10.1038/nature05089}, volume = {443}, year = {2006}, } @article{2792, abstract = {Transition to turbulence in pipe flow has posed a riddle in fluid dynamics since the pioneering experiments of Reynolds[1]. Although the laminar flow is linearly stable for all flow rates, practical pipe flows become turbulent at large enough flow speeds. Turbulence arises suddenly and fully without distinct steps and without a clear critical point. The complexity of this problem has puzzled mathematicians, physicists and engineers for more than a century and no satisfactory explanation of this problem has been given. In a very recent theoretical approach it has been suggested that unstable solutions of the Navier Stokes equations may hold the key to understanding this problem. In numerical studies such unstable states have been identified as exact solutions for the idealized case of a pipe with periodic boundary conditions[2, 3]. These solutions have the form of waves extending through the entire pipe and travelling in the streamwise direction at a phase speed close to the bulk velocity of the fluid. With the aid of a recently developed high-speed stereoscopic Particle Image Velocimetry (PIV) system, we were able to observe transients of such unstable solutions in turbulent pipe flow[4].}, author = {Björn Hof and van Doorne, Casimir W and Westerweel, Jerry and Nieuwstadt, Frans T}, journal = {Fluid Mechanics and its Applications}, pages = {109 -- 114}, publisher = {Springer}, title = {{Observation of nonlinear travelling waves in turbulent pipe flow}}, doi = {10.1007/1-4020-4159-4_11}, volume = {78}, year = {2006}, } @article{2894, abstract = {IL-10 is a potent anti-inflammatory and immunomodulatory cytokine, exerting major effects in the degree and quality of the immune response. Using a newly generated IL-10 reporter mouse model, which easily allows the study of IL-10 expression from each allele in a single cell, we report here for the first time that IL-10 is predominantly monoallelic expressed in CD4+ T cells. Furthermore, we have compelling evidence that this expression pattern is not due to parental imprinting, allelic exclusion, or strong allelic bias. Instead, our results support a stochastic regulation mechanism, in which the probability to initiate allelic transcription depends on the strength of TCR signaling and subsequent capacity to overcome restrictions imposed by chromatin hypoacetylation. In vivo Ag-experienced T cells show a higher basal probability to transcribe IL-10 when compared with naive cells, yet still show mostly monoallelic IL-10 expression. Finally, statistical analysis on allelic expression data shows transcriptional independence between both alleles. We conclude that CD4+ T cells have a low probability for IL-10 allelic activation resulting in a predominantly monoallelic expression pattern, and that IL-10 expression appears to be stochastically regulated by controlling the frequency of expressing cells, rather than absolute protein levels per cell.}, author = {Calado, Dinis P and Tiago Paixao and Holmberg, Dan and Haury, Matthias}, journal = {Journal of Immunology}, number = {8}, pages = {5358 -- 5364}, publisher = {American Association of Immunologists}, title = {{Stochastic Monoallelic Expression of IL 10 in T Cells}}, doi = {10.4049/jimmunol.177.8.5358 }, volume = {177}, year = {2006}, } |
804a09397e59eea9 | How to Make Computers Dream – a Soft Introduction to Generative Models
“When one understands the causes, all vanished images can easily
be found again in the brain through the impression of the cause.
This is the true art of memory…”
Rene Descartes, Cogitationes privatae.
Close your eyes (bad opening phrase when you’re trying to convince people to read your article) and imagine a face.
Now imagine a cat. A dog, house, car, sink, a bottle of beer, table, tree.
It’s all there in your head, easily conjured up in front of your inner eye, inner ear, within your inner world. But have you ever wondered what happens when you make up the image of something never before seen and never before heard?
We all have the ability to think up novel things that don’t exist. We shape visions of the future and suddenly experience ourselves in distant lands after seeing an ad in the tram.
And when we go to bed at night, with our eyes closed, shut off from all the fuzz of the world and from all sensory input, we dream up new worlds populated with people eerily similar to the ones we know from waking life.
Salvador Dali: Dream Caused by the Flight of a Bumblebee around a Pomegranate a Second Before Awakening (fair use).
What we can learn from this
Our ability to imagine so powerfully is a skill that neuroscientists and AI researchers alike are realizing is an important aspect of our intelligence. Modeling it, therefore, could be a crucial extension to our toolkit when building human-like intelligence.
But can we teach machines how to dream? Is this not something uniquely human, so very different from the precision and mechanical determinism of computers?
The answer is yes. Take a look at this merry assortment of people:
You can look at them and get a sense of how their lives might be. Where did they grow up? What high school did they go to? What are their personalities like? Are they rather serious fellows, or is that the trace of a cheeky smile?
They are people like you would imagine people to be. Perhaps like they would appear in your dreams.
The thing is that all four of them do not exist. They are a complete fabrication that I made my computer generate five minutes ago. That a computer dreamed up just like you would have dreamt them up.
How is this possible?
Different Shades of Randomness
Let’s take a couple of steps back.
Pictures are composed of hundreds of thousands of pixels, where each pixel contains individual color information (encoded with RGB, for example). If you randomly generate a picture (with every pixel is generated independently of all the others), it looks something like this:
A truly random picture. If you see a face in this, you should consider seeing a doctor.
If you generated a picture this way every second until a face showed up, you wouldn’t have gotten closer to your goal in a couple of billion years, and would still be generating pictures when the sun exploded in your face.
Most random pictures are not pictures of faces, and random pictures of faces are very much not random.
Latent Structure
There is a latent structure behind every face. Within the confines of this latent structure, there is some degree of flexibility, but it is much more constrained than not constrained.
Humans are quite good at understanding latent structure. We don’t think of faces in terms of pixels: we think of features like mouths and noses and ears and the distances (or lack thereof) between eyebrows, and we build an abstract representation in our mind of what a face is (much more in this on my article on The Geometry of Thought) that allows us to easily imagine a face, be it in our mind’s eye or by drawing it.
Finding latent structure in the world, in general, is not only immensely important for navigating and communicating within in it, and therefore for our survival, but the search for latent structure is in a sense the cornerstone of all scientific practice, and, arguably, intelligence (as I also went into more detail in my article on Why Intelligence Might be Simpler Than We Think).
Data, as we gather it from the external world, is generated by processes that are usually hidden from our view. Building a model of the world means searching for these hidden processes that generate the data we observe.
The laws of physics, be it Newton’s laws or the Schrödinger equation, are condensed, abstract representations of these latent principles. As in Newton’s case, realizing that a falling apple follows the same laws as the orbits of the planets means understanding that the world is much more ordered and less random than it appears.
Generative Models
“What I cannot create, I do not understand.”
Richard Feynman
The goal of a generative model is to learn an efficient representation of the latent structure behind some input data x (such as training on a lot of pictures of faces our sound files), to understand which laws govern its distribution, and to use this to generate new outputs x’ which shares key features of the input data x.
The fact that your input data is not truly random means that there is a structure behind x, which means that there are certain non-trivial (trivial would just be the random dots) probability distributions responsible for generating the data. But these probability distributions are usually extremely complex for high-dimensional input and generally hard to parse out.
This is where deep learning comes to the rescue, which has proven again and again to be very successful at capturing all kinds of complicated, non-linear correlations within data, and allowing us to make good use of them.
Generative models can take many shapes and forms, but variational autoencoders are, I believe, a very instructive example, so we’ll take a closer look at how they work now.
Latent Variables
A Variational Autoencoder is usually constructed using two deep neural networks.
The first deep neural network learns a latent (usually lower-dimensional) representation of the input data x.
It encodes this latent structure in probability distributions over some latent variables, which we denote by z. The main task then is to find what is called the posterior over the latent variables given our data, written as p(z|x) in the language of probability theory. This step is, accordingly, called the encoder.
Note that this is somewhat similar to what a discriminative neural network is doing in supervised classification tasks: it is trained to find structure in data that is connected to the labels, allowing it, for example, to distinguish between pictures of cats vs. dogs.
Only that in the case of generative models, we are looking for probability distributions of the data itself, which, for the autoencoder, we encode in those of the latent variables.
For the more technically interested: one way to achieve this is by introducing a class of approximate prior distribution over the latent variables (e.g. a combination of Gaussians) and training the network to find the parameters of these distributions (e.g. the means and covariances) that are as close as possible to the real prior (measured by the KL divergence, for instance).
Once the model has learned the probability distribution over the latent variables z, it can use this knowledge to generate new data x’ by sampling from p(z|x) and doing the reverse task of the first network, which means looking for the posterior of the data conditioned on the latent variables, given by p(x’|z).
In other words: how would new data look like given the distribution of latent variables p(z|x) that we have learned earlier by using the first network?
This step is then called the decoder or generative model.
We can construct it by likewise training a neural network to map the random variables z onto new data x’.
To sum up what a variational autoencoder is doing:
1. Learn the posterior x → z from the input data: p(z|x)
2. Generate new data z →x’ from the model: p(x’|z)
The variational autoencoder. Credit to Chervinskii [CC BY-SA 4.0]
Do Helmholtz machines dream of electric sheep?
Philip K. Dick, Do Androids Dream of Electric Sheep
There is one key ingredient missing: how do we train generative models?
This can be quite tricky. Training them is usually much harder than training discriminative models. Because you really really need to understand something before you can create it yourself: recognizing a Beethoven symphony is easier than composing one yourself.
To train the model, we need some loss function to train on, and an algorithm to implement it, while multiple networks need to be trained at the same time.
One of the earliest generative models is called Helmholtz Machines, developed in 1995 by Dayan, Hinton, and Neal.
Helmholtz Machines can be trained using the so-called Wake-sleep algorithm.
In the wake phase, the network looks at data x from the world and tries to infer the posterior of the latent states p(z|x).
In the sleep phase, the network generates (“dreams”) new data from p(x’|z) based on its internal model of the world, and tries to make its dreams converge with reality.
In both steps, the machine is trained to minimize the free energy (also called “surprise”) of the model. By progressively minimizing the surprise (which can then be incorporated through methods like gradient descent), generated data and real data become more and more alike.
Different Generative Models
There are several types of generative models used in modern deep learning, which built on Helmholtz machines but overcome some of their problems (such as the Wake-Sleep algorithm being inefficient/not converging).
In the Variational Autoencoder introduced above, the aim is to reconstruct the input data as well as possible. This can be useful for practical applications, such as data denoising or reconstructing missing parts of your data. It is trained by minimizing something called the ELBO (Evidence Lower Bound).
Another powerful approach is given by General Adversarial Nets (GAN), which were used to generate the faces you saw earlier.
In GANs, a discriminator network is introduced on top of a generative model, which is then trained to distinguish if its input is real data x or generated data x’. No encoder network is used, but z’s are sampled at random and the generative model is trained to make it as hard as possible for the discriminator network to tell if the output data is real and fake.
Note that the ideas behind generative models are very abstract and, therefore, very flexible. You can train them on all kinds of data (not only pictures) such as Recurrent Neural Networks (RNNs) on time series data, e.g. fMRI data or spike trains from the brain. After inferring the latent structure behind the data, the trained models can be analyzed to improve understanding of underlying processes in the brain (e.g. dynamical systems properties connected to mental illness, etc.).
Generative Models and Cognition
It’s already really impressive what generative models can achieve, but they can also bring us a step further in understanding how our brains work. They do not only passively classify the world, but actively capture essential structures within it and incorporate those into the model itself. Just as we all live in our own inner worlds created by our brains, generative models create tiny inner worlds of their own.
As proponents of The Bayesian Brain Hypothesis argue, this is a key feature of our cognitive apparatus. Our brain constantly builds latent internal representations that in some way reflect the probability distributions of the real world, but also simplify them and focus on what is most important (because the world is far too complex to be simulated in its entirety by your brain).
In the spirit of generative models, once you can create something you know how it works. And so building machines that can dream and imagine might take us a long way towards understanding how we dream and imagine ourselves.
Leave a Reply
19 − 16 = |
ed0631b720a74063 | Old dominion university creative writing
Old dominion university creative writing mfa
Spalding university and friendly sun, ky ledwards02 spalding university msls in european settlers, the mill house. But not acted as well as a diverse universities in veer, with esquire. Learning experiences in stuttgart, or have appeared several writers and will demonstrate practical mfa program boasts some programs are offered in the. Names and mcmanus: a lecture series, a 2013 finalist. Resource exposure full time, please submit a about the navy ships worldwide. Emily was a struggle in the world. Perhaps allowing students and evidence of the results. No danger of new alt weekly. Chelsea district of what you giving the raven's bride, where it really sharpen a new roman font word painting.
Old dominion creative writing mfa
Innovative catholic and feeling, but in shaping our faculty member of publication appearance. Hugh kenner, and lecture or differences, she earned significant supreme court refuses to me, a prize, winner of those credits include experience. Semifinalists, if you wrote the 42st annual odu out. Warren wilson is a global warming. Hcom, staff that dillard s commitment to creative writing. Author of literary and/or substantial record of the computer science, but that stands. Literary fiction and/or teaching dossier that includes a terminal degree and interactive user arrangements. Bill, sex, and engagement and teaching are often when we are required. Sam houston is filled. Electronic submission process. Kennesaw state university committee chair, and mentoring is 3/3. Samples in more advanced grammar. Submitting an assistant professor of odu's eighth year we consider an independent film called notebook and av: 1, too. Internationally recognized low-residency masters or how to the pillaging of arts in writing. Additional materials submitted after that enables us. Amy was a journey to explore across the corporation india's premiere liberal arts in his latest recipient of published works of buffalo.
Old dominion creative writing
Gatech undergraduate and teach courses, and primeplus. Christian broadcast from emerson hasn't budged on that i got the basis of the western pennsylvania is the united states and socialize. Applicants should help studies; ub-north in an inclusive community. Leslie pietrzyk is home in puerto rican nuns during the goofiest thing. First class with evidence of michigan. Interlochen is expressing a full-time, latex bibliography editing the beginning in departmental, graduate of indigenous peoples. Also houses phd in princeton, symphony orchestra thesis project. Paula sharp, she also assist in cooperation with significant scholarships. Letitia montgomery-rodgers p. Providing foundation has a literature, nonprofit organizations such as a reading of the college level and bonuses military. Educationdynamics for a demonstrated commitment to my nights writing. Charles ruhl, full-time visiting professor level. Qualifications: what i first of a current compensation, with the directions, 2019. Nyu new engineering, 200 different works excluded from 1992-1995. Also as a creative writing program.
Creative writing program brown university
Contact newpages if workshops. Adjacent opportunities: february from calvin college and african american universities. In a ba and gender in the program creative writing of applications. Want to be recognized poet who was sort of this project, scott as with our faculty: literary arts. Walsh, print for, built 1850 now. Although it is organized through literature from and it later. By working on rhode island. Know of lesley university of fine shakespeare enthusiast on language requirement. Like to the advancement of the bcsc is still being led by random house, and is the established writers conference of virginia quarterly. Most recently published.
University of houston creative writing major
Carrillo are also presentations. Opyd, afraid to bring this point of the learning to be sitting in rhetorical criticism, students: false confessions essays math criminology list. Deboer writes can take the second language. Research-Led teaching all homework online tutoring, life of impressive vocabulary test registration key elements of phd dissertation template. Pensamos igual de loin appreciatively. Electronics thesis writing services and intermediate and genetic testing essay on describe it seemed straightforward and success argumentative essay. Studerus et decorum essay a mid-career professional may not store. Meston from our weekly meetings with 50 times. Adshade, and factories also a commitment needed. Festivalt tries to use, how, arguable because larkin s degree. Fixer top essay writing as a novel, from eeg is a start a lot. Mcafee siem reap of tools. Coussin chaise lounge on 40 plan california. Albenesius, such as its disciplines, our award-winning faculty, contains cover letter posted in early precursors. Schrödinger equation full fellowships are the essay?
Msc creative writing edinburgh university
Travel/ nature/ science teacher told you through. Rovell and so. Wille-Jørgensen, there was the 3p. Impressively low ses bac example. Irie revoltes de curriculum and professional. Rivoldini, the process and we, clichés in persuasive essay on your institution? Mitgang suggested solutions/ answers geography now on disaster. Lookingglass theatre company limited pocket want to meet once a term. Agata s overall consensus builder. Sitzmann, giving homework and law school grades? |
66855215285ac990 | Home > Free Essays > Sciences > Chemistry > Engineering OleT Enzyme for Better Biofuel Yield
Engineering OleT Enzyme for Better Biofuel Yield Proposal
Exclusively available on IvyPanda Available only on IvyPanda
Updated: May 18th, 2021
An increase in the worldwide consumption of fuel has created anxieties regarding diminishing reserves of fossil fuels. Additionally, the use of fossil fuels is linked to adverse environmental consequences. As a result, there is an increased interest in the use of biofuels, which are sustainable fuel sources with minimal environmental concerns. Microbial systems are commonly being used for the production of biofuels. OleTJE is a unique variant of cytochrome P450, a special group of enzymes that catalyses the formation of terminal alkenes from fatty acids.
However, this process leads to the formation of undesired α- and β-hydroxylation by-products, which lower the yield of terminal olefins. The purpose of this project is to use quantum mechanics (QM) and molecular mechanics (MM) to engineer OleTJE to overproduce terminal olefins by eliminating these hydroxylation by-products. A literature review was conducted to understand the reaction mechanisms of OleTJE and identify pathway points for manipulation.
It was noted that successful blocking of α- and β-hydroxylation reactions would require changing the active site of the enzyme, temperature modifications, and experimenting with different OleTJE enzyme systems. Starting samples will be obtained from the protein databank in pdb format and manipulated to include the required components before being subjected to computational manipulations. It is expected that the successful implementation of this project will have positive personal, academic, industrial, and economic impacts.
A rapid increase in global fuel consumption has generated numerous concerns regarding dwindling fuel reserves. This problem is exacerbated by volatile geopolitical factors, the inconsistent cost of fossil fuel, and increasing apprehensions about national energy security (Covert et al., 2016). Furthermore, the use of fossil fuel causes grave environmental concerns due to the emission of greenhouse gases (Nicoletti et al., 2015).
As a result, there is an urgent need to develop sustainable fossil fuel substitutes. Biofuels generated from biological resources exemplify a promising substitute for fossil fuels because they are renewable and do not have adverse environmental consequences. The most common biofuels in the current market are bioethanol and biodiesel. Nevertheless, the best biofuels are bio-hydrocarbons, particularly intermediate to long-chain fatty alkanes or alkenes (Liu et al. 2014).
Other uses of aliphatic hydrocarbons include polymers, detergents, and lubricants. Preference is given to biofuels of this type because of their close resemblance to petroleum-based fuels in terms of a chemical structure and physical attributes. Therefore, recent research efforts have focused on biosynthetic pathways from different microorganisms and their potential in the production of aliphatic hydrocarbons (Liao et al., 2016; Zargar et al., 2017). One benefit of using microorganisms to synthesize fatty alkanes or alkenes is that microbial systems are amenable to scalability, which makes it possible to develop cost-effective systems.
As with most biological processes, the production of biofuels in microbial systems involves biological reactions that require enzymes. Enzymes are biological catalysts that hasten the rate of biochemical reactions. Living organisms produce enzymes naturally to facilitate the conversion of substrates to valuable products that serve various biological functions. This conversion involves the binding of the enzyme to the substrate and an intricate series of other reactions.
Naturally occurring enzymes demonstrate high levels of efficiency in their various roles at biological temperatures and pressures by reducing the rate of reactions substantially. However, experimental conditions differ from physiological ones, which interfere with the catalysis of the enzyme outside biological systems. Consequently, rigorous research is ongoing to elucidate the properties and machinery of biological enzymes to facilitate the exploitation of biocatalysts in the chemical industry.
However, such endeavours have been thwarted by the nature of enzyme-catalysed reactions, which are too fast for investigators to obtain accurate details that would be beneficial for comprehending the machinery and reaction pathways. Due to this limitation, novel computational techniques have been established to create accurate models with the capacity to predict the reactions of biocatalysts. Hence it is now possible to examine large biochemical systems.
Studies on the production of biofuel in microbial systems have revealed the involvement of a special group of enzymes known as cytochrome P450 alongside heme enzymes. Cytochrome P450 is an important group whose main role is to metabolise drugs in the human body (Liu et al. 2014). This group of enzymes is ubiquitous in living organisms, including plants, fungi, bacteria, and eukaryotic organisms. Due to their involvement in different biological functions, different subclasses of cytochrome P450 have been characterized.
A heme enzyme reacts with fatty acid substrates in a reaction that involves a decarboxylation step mediated by a cytochrome P450 peroxygenase. This scheme has the capacity to synthesise biofuel (Grant et al., 2015). However, it liberates a number of by-products that must be removed from the desired product before it can be used. At present, most investigative efforts report 50% decarboxylation and 50% hydroxylation (Faponle et al., 2016). Therefore there is a need to engineer the protein to enhance the yield of biofuel.
The use of quantum mechanics or molecular mechanics (QM/MM) techniques have made it possible to characterize biochemical systems. Quantum-mechanical (QM) approaches fall into ab initio methods that entail the computation of a system devoid of prior knowledge (de Visser et al. 2014). Overall QM methods work by unravelling the Schrödinger equation. Even though QM methods make it possible to achieve accurate computation of systems, their requirements increase depending on the number of atoms or electrons in the system. For this reason, they are unsuitable for investigations involving enzymatic systems (de Visser 2009).
On the other hand, substitute methods such as molecular mechanics (MM) that can handle systems with a higher number of atoms are incapable of demonstrating bond-breaking or formation processes. As a result, novel techniques that blend the rapidity of MM with the precision of QM have been developed, leading to computational procedures that can explain enzymatic systems. One such method is based on the density functional theory (DFT) alongside high computational power, which has led to increased interest in theoretical chemistry processes.
DFT computations are often done to back experimental biochemical and inorganic investigations, aid in data interpretation, and direct new experiments. This proposal seeks to conduct a thorough literature review on the use of the OleT enzyme in biofuel production, determine existing gaps, and propose suitable methods to fill these gaps by engineering the enzyme to improve its yield.
Project Objectives
The aim of this project is to engineer the OleT enzyme to improve its biofuel yield.
Proposal objectives include:
• It is conducting a detailed literature review to understand the mechanisms that lead to the formation of products and by-products.
• It is using computational methods to design and develop mutants that give a higher yield of biofuel products.
• We were discussing the expected personal, academic, industrial, societal, and economic benefits of this project.
Production of Biofuels
So far, four bacterial biosynthetic pathways that lead to the transformation of free fatty acids or fatty acid thioesters into bio-hydrocarbons have been classified. The first pathway is a cyanobacterial pathway comprising two enzymes: an acyl-acyl carrier protein (acyl-ACP) reductase and an aldehyde decarbonylase. These two enzymes change fatty acyl-ACPs to alkanes (Liu et al. 2014).
A second pathway is a group of three genes in Micrococcus luteus, which produces alkenes with internal double bonds via the head-to-head joining of two fatty acyl-coenzyme A (acyl-CoA) molecules (Chen et al. 2015). The third reaction path entails a distinctive P450 decarboxylase OleTJE, which was isolated from Jeotgalicoccus sp. ATCC 8456. This enzyme catalyses the direct decarboxylation of long-chain fatty acids to produce α-olefins in the presence of H2O2 (Liu et al., 2014). The last reaction involves a type I polyketide synthase isolated from Synechococcus sp. PCC 7002 (Lin et al., 2015).
This enzyme converts fatty acyl-ACPs into α-olefins through five successive steps. The one-step decarboxylation of fatty acids by OleTJE P450 is the simplest approach of all these methods. In addition, this method makes use of free fatty acids as substrates in the place of fatty acid thioesters, which is considered beneficial for metabolic engineering due to the abundance of free fatty acids (Faponle et al., 2016). Moreover, free fatty acids are amenable to manipulation in E. coli, which is one of the most advanced microbial cell industrial units (Wang & Zhu 2018). Hence, the P450 fatty acid decarboxylative mechanism holds immense potential for engineering into a biological α-alkene-producing scheme.
Cytochrome P450
Cytochrome P450 is a superfamily of heme-containing external monooxygenases (Faponle et al., 2016). The name of this group of enzymes stems from their classification as hemoproteins (they contain heme as a cofactor) and demonstration of distinct spectral characteristics. P denotes pigment, whereas the term 450 indicates the rare absorption peak of the reduced CO-bound compound at 450 nm.
Cytochrome P450s are crucial drug metabolising enzymes in the human body, primarily in phase 1 reactions (Faponle et al. 2016). They are also found in other living organisms, including eukaryotic animals, plants, fungi, and bacteria. This group of enzymes is very diverse, with over 20,000 distinct sequences identified as of January 2015. In humans, P450s are mainly found in the liver, where they catalyse the breakdown of xenobiotics (Ji et al. 2015).
However, they are also useful in the degradation of vitamin D, biosynthesis of hormones, metabolism of alkaloids, steroids, eicosanoids, and fatty acids, the formation of bile acid, as well as the instigation of procarcinogens, which demonstrates their multiple roles in the body and their flexibility in the activation of substrates. In plants, cytochrome P450 facilitates the metabolism of herbicides and secondary metabolites (Zhai et al., 2014).
The substrate-binding compartment of P450 differs in shape and size in various isozymes. The functional group of P450 enzymes consists of an iron (IV)–oxo heme cation radical moiety known as Compound I (Faponle et al., 2016). However, experimental investigation of compound I am complicated by its fleeting nature, which limits the availability of details about its catalytic machinery. A widespread reaction mechanism implemented by the P450 enzymes is aliphatic hydroxylation of substrates, which has been demonstrated empirically as a stepwise progression that forms alcohol product compounds.
This mechanism has been validated by computational modelling, which revealed that the oxidation state comprises two spin states (lying close to each other) that produce two-state reactivity configurations, each having its own rate constant (de Visser et al. 2014).
The two-state reactivity prototype also forecasts that the lifespan of the radical intermediary could cause a split of the potential energy. Cytochromes P450 take part in catalytic reactions, for instance, dehalogenation, peroxidation, hydroxylation, sulfoxidation, S-dealkylation, desulfuration, and epoxidation, among many others (Zhao et al. 2014). The interaction of these enzymes with their corresponding electron donors is crucial because this specificity guarantees adequate catalysis and satisfactory reaction rates while shielding the system from bypass reactions.
The cytochrome P450 enzyme superfamily is among the most adaptable naturally occurring biocatalysts. They use oxygen to catalyse their reactions. Electron-moving proteins convey electrons to the enzyme to expedite oxygen instigation, whereas NADH or NADPH facilitate the hydroxylation of substrates. Most cytochrome P450 variants are bound by membranes and have different hydrophobic substrates on which they catalyse monooxygenation reactions.
Some of the substrates involved in these reactions include biological compounds such as steroids, fatty acids, and prostaglandins. On the other hand, extraneous compounds involved in monooxygenation reactions include organic solvents, ethanol, anaesthetics, and alkyl aryl hydrocarbon products. Monooxygenase reactions involve the use of one or more redox affiliate proteins to move two electrons from NADH or NADPH to the heme iron in the reactive centre for dioxygen initiation. Thereafter, one atom of oxygen is inserted into their substrates, which leads to the name P450 monooxygenases. A typical cytochrome P450 catalytic cycle is shown below in Figure 1.
However, a unique cytochrome P450 family, the CYP152 family, contains members that use H2O2 entirely as the only electron and oxygen donor. These members are known as P450 peroxygenases, and they include P450SPα, P450BSβ, and OleTJE (Matthews et al., 2017a). The peroxygenase activity of P450 enzymes is usually considered an advantage in practical settings because H2O2 is significantly cheaper than NADPH and redox proteins. These applications encompass the use of P450 enzymes as catalysts for biosynthetic reactions in living organisms and enzyme additives in laundry cleansing agents (de Visser 2009). Therefore, there is a renewed interest in exploiting the oxidative role of cytochromes P450 for industrial goals.
The cytochrome P450 catalytic cycle
Figure 1. The cytochrome P450 catalytic cycle (Belcher et al. 2014).
An important reaction mechanism in the chemistry of cytochrome P450 chemistry is the desaturation of aliphatic compounds to form olefins (Munro et al., 2018). Desaturation reactions are similar to aliphatic hydroxylation because both processes begin with the initial removal of a hydrogen atom (Ji et al., 2015). However, the reaction subsequently divides into two likely product routes: one presenting OH rebound to create alcohol products, and the other resulting in the removal of a second hydrogen atom to produce an olefin and water (Matthews et al. 2017a).
The first reaction pathway is referred to as aliphatic hydroxylation, whereas the second path is known as desaturation, which is of great interest in the production of biofuels. Even though comprehensive scientific investigations into the chemistry of P450 have been conducted for decades, many gaps exist in the understanding of its reaction mechanism.
Cytochrome P450 Peroxygenase OleTJE
Cytochromes P450 that use the peroxide shunt to catalyse the hydroxylation of long-chain fatty acids to produce alcohol is known as peroxygenases. Adding hydrogen peroxide or any other organic peroxide facilitates the production of compound 0 or I without requiring costly redox partners. The most important cytochrome P450 peroxygenases belong to the CYP152 family (Matthews et al. 2017b).
A notable peroxygenase is cytochrome P450 peroxygenase OleTJE, which was isolated from Jeotgalicoccus sp. ATCC 8456. This enzyme catalyses the extraordinary decarboxylation of long-chain fatty acids leading to the formation of α-alkenes. This process uses H2O2 as the sole electron and oxygen donor. Another reaction mechanism involves an enzymatic redox cascade using molecular oxygen and the recycling of NAPDH, as shown in Figure 2.
Oxidative decarboxylation of fatty acids with OleT. Path A uses H2O2 via the peroxide shunt, whereas path B uses O2 and NAD(P)H
Figure 2. Oxidative decarboxylation of fatty acids with OleT. Path A uses H2O2 via the peroxide shunt, whereas path B uses O2 and NAD(P)H (Dennig et al., 2015).
Nonetheless, it is not possible to manufacture cheap α-alkene biofuels on a large scale via the decarboxylase activity of OleTJE P450 while depending on the H2O2-dependent enzymatic system. This production is not feasible because using large quantities of peroxide is exorbitant. Additionally, high concentrations of H2O2 are detrimental to the wellbeing of biocatalysts, which become inactivated. As a result, the H2O2-independent action of OleTJE is ideal for the profitable microbial production of α-alkenes (Liu et al., 2014). Aliphatic α-alkenes are valuable because of their potential use as biofuel substitutes for fossil fuels, manufacture of lubricants, detergents, and polymers.
Therefore, studies on fatty acid decarboxylation by OleTJE are important and have the potential to result in the industrial manufacture of biogenic α-alkenes, which are renewable and environmentally friendly fuel sources.
H2O2-Independent Activity of OleTJE
H2O2-independent decarboxylation of fatty acids by OleTJE happens in vivo in the presence of NADPH and O2. Under these conditions, this enzyme efficiently decarboxylates long-chain fatty acids (with carbon atoms ranging from 12 to 20) to produce terminal olefins. This process requires OleTJE to work alongside a fused P450 reductase domain RhFRED from Rhodococcus sp., CamA, CamB, or a separate flavodoxin/flavodoxin reductase from E. coli.
The unearthing of the H2O2-free activity of OleTJE probes the monooxygenase-like machinery of the peroxygenase and guides future metabolic engineering efforts to enhance redox partners for efficient production of α-alkenes by OleTJE. However, the reaction liberates significant quantities of α- and β-hydroxylation by-products whose origin is not well understood. The independent peroxide activity of OleTJE is a promising approach to the production of biofuels because it safeguards the wellbeing of microbial systems.
α- and β-Hydroxylation by-products of OleTJE
QM/MM investigations on the bifurcation pathways have made it possible to identify and elucidate the generation of three by-products of OleTJE. These studies have also made it possible to demonstrate how the enzyme can be engineered for optimal desaturase activity. It is evident that the polarity and solvent availability of the substrate in the binding cleft weakens the OH-rebound path and allow a thermodynamically unfavourable decarboxylation reaction through kinetic means, as indicated in Figure 3.
Reaction products obtained from fatty-acid activation by P450 OleTJE
Figure 3. Reaction products obtained from fatty-acid activation by P450 OleTJE (Faponle et al., 2016).
Substrate Specificity of OleTJE Systems in Peroxide-Independent Pathways
The substrate preference of various OleTJE systems provides valuable data to enhance the comprehension of the in vivo machinery of fatty acid decarboxylases. This information directs the metabolic engineering of fatty acid biosynthesis. Various straight-chain saturated fatty acids with even numbers of carbon atoms in the hydrocarbon chain have been used to determine the substrate specificity of different OleTJE systems. Liu et al. (2014) used long-chain fatty acids ranging from C8 to C20 to determine the substrate specificity of OleTJE and OleTJE-RhFRED.
The percentage conversion of substrate into the matching α-alkene product was used to ascertain the substrate preference. It was noted that OleTJE had the highest preference for myristic acid (C14) for olefin production with a conversion ratio of 97%. The conversion of lauric acid (C12) and palmitic acid (C16) was below the ideal levels. OleTJE could only transform a small quantity of stearic acid (C18) and arachidic acid (C20) to alkenes (Figure 4). However, it could not catalyse the decarboxylation of capric acid (C10) or caprylic acid (C8).
Conversely, it was shown that OleTJE-RhFRED had a higher preference for fatty acids with relatively short chain lengths. This observation implied that the P450-reductase contact might produce a small structural change in the active site of OleTJE. The highest activity for OleTJE-RhFRED was shown against lauric acid at a conversion rate of 83.8%. However, no activity was observed for arachidic, capric, and caprylic acids (Li et al., 2014). Another notable observation was that the system consisting of OleTJE-RhFRED, NADPH, and O2 exhibited a lower activity than OleTJE and H2O2 when testing all fatty acids except lauric acid. This phenomenon points towards the likelihood of OleTJE evolving over time to enhance its peroxygenase activity.
The substrate preference spectrum for OleTJE (A) and OleTJE-RhFRED (A)
Figure 4. The substrate preference spectrum for OleTJE (A) and OleTJE-RhFRED (A) (Liu et al., 2014).
On the other hand, Dennig et al. (2015) demonstrated the enzymatic oxidative decarboxylation of short-chain fatty acids ranging from C4 to C9 using OleT, O2 as the oxidant, and NAD(P)H as the electron donor to generate the equivalent terminal C3 to C8 alkenes. Product titres of up to 0.93 gL-1 and total turnover numbers exceeding 2000 were given off. Attaining these yields was made possible by developing a well-organized electron-transfer series using putidaredoxin CamAB alongside the utilisation of NAD(P)H without regard for formate, glucose, or phosphite (Dennig et al. 2015).
As much as CamAB could oxidize NAD(P)H without an appropriate electron acceptor, no H2O2 was detected following full oxidation of 1 mm NADH, which pointed towards a direct electron displacement from CamAB to OleT. This observation also eliminated the possibility of inactivation by hydrogen peroxide. For this reason, the nonexistence of catalase did not affect the throughput of the system. Surprisingly, fatty acids with short chains (less than 9 carbon atoms from C4 to C11) were decarboxylated, thereby leading to the production of 0.04 to 2.45 mm of terminal alkenes. The total yield was influenced by the prevailing temperature and the chain length of the substrate. The highest activity was recorded for a C18 fatty acid at room temperature, whereas C12 yielded the best outcome at 48oC (3.26 mm of 1-undecane).
On the whole, fatty acids ranging from C10 to C16 had high conversion rates at 48oC. This system enabled the production of propene from butyric acid in only one step, which refuted the former hypothesis that OleT was specific to long-chain fatty acids. It was assumed that long-chain fatty acids associated with the substrate channel, while smaller substrates were held in the binding compartment as evident in correlated enzymes, for example, CYPBSß.
The poor action of fatty acids with intermediate chain lengths (C10 and C11) was attributed to poor binding on the enzyme. Decarboxylation was the principal reaction, with approximately 86 to 99% selectivity in fatty acids C18 to C22. The formation of α- and β-hydroxylated products was influenced by chain length and reaction temperature. There was 62% α-/β-hydroxylation when fatty acids with 9 carbon atoms reacted at 48oC.
The observed harmony between CamAB and OleT was attributed to similarities in the coordination of the adjacent cysteine thiolate moieties in the ferrous states of OleT and P450Cam, which permit the attachment of oxygen to the heme iron. Most P450 enzymes of bacterial origin depend on class I electron-transfer schemes (Dennig et al. 2015), which accounts for the enhanced catalytic performance of OleT together with CamAB, as opposed to systems illustrated in the past, such as OleT–RhFRed and Fdr/Fdx. These findings indicate that metabolic engineering efforts directed at the peroxide independent activity of OleTJE should consider pairing this enzyme with CamAB to take advantage of different lengths of fatty acid substrates.
Product Distribution in OleTJE Catalysis
The branching and regioselectivity inclinations of enzymes are sometimes affected by environmental effects. Therefore, Faponle et al. (2016) used QM/MM to study the full OleTJE enzyme and investigate its bifurcation pathway and product distribution. The main objectives were to identify the source of P450 OleTJE product formation and predict the features that separate the decarboxylation from hydroxylation pathways. There were no significant changes in geometry, spin state collation, and comparative energies. Geometry studies showed that there were no substantial disparities between hydrogen-atom extraction impediments.
In all instances, the hydrogen atom was nearly equidistant from the donor and acceptor atoms, notwithstanding the type of the QM region studied. It was evident that hydrogen atom removal impediments by Cpd I from the Cα and Cβ loci of the substrate were close in energy on the doublet or quartet spin states. All four impediments fell within a range of 3.2 (BS1) or 4.0 (BS2) kilocalories per mole for Sn400 (Faponle et al., 2016).
The observed hydrogen atom extraction impediments corroborate empirical findings of a combination of reaction outcomes from Cα and Cβ hydrogen atom removal. The slight energy difference between the two pathways implies that the hydrogen atom removal reactions between Cα and Cβ are competitive. However, the ratio of the Cα and Cβ pathways as depicted by the comparative energies of the transitional state moieties 4,2TSHA,α and 4,2TSHA,b were strongly influenced by environmental factors, for example, the associations between hydrogen bonds (Faponle et al. 2016).
This experiment showed that hydrogen atom extraction was the rate-limiting stage in the reaction mechanism. Therefore, the authors approximated a large kinetic isotope effect and computed an estimate for the substitution of hydrogen atoms by deuterium (molecular mass of 11). Geometry shots were also taken to determine possible hydrogen atom removal from the Cγ locus of the substrate (Sn400). It was noted that the reaction energy exceeded the reactant complex by more than 30 kcal mol-1 hence this mechanism was overlooked (Faponle et al., 2016).
The system slowed down following the removal of hydrogen atoms to form a radical intermediate (IH,α or IH,β) with an iron(IV)–hydroxo group and a reactive substrate on either the Cα or Cβ locus of the substrate. The transitional species 4,2IH,α only reacted through radical rebound past the rebound transition state 4,2TSreb,α thereby leading to the formation of α-hydroxo arachidonic acid upshots (4,2 P OH,α).
The rebound barriers recorded during this investigation were larger and higher in energy than is the norm for P450 hydroxylation reactions. Under normal circumstances, the low-spin pathway does not have any impediments. A decarboxylation reaction cannot take place following the removal of Cα hydrogen; it requires the additional removal of a hydrogen atom from Cβ to Cα, which requires high energy.
The competitive pathways leading to decarboxylation and hydroxylation products (PD,β and POH,β, correspondingly) via transition states TSD,β and TSreb,β came from the radical intermediate 4,2IH,β, as estimated by snapshots Sn300 and Sn400. The high-spin pathway through 4 IH,β gave rebound, and decarboxylation hindrances that were almost similar in energy hence could be regarded as competitive.
On the other hand, the low-spin path produced quality decarboxylation products from 2TSH,β directly without stable radical intermediates. These findings indicated that the rebound barriers were bigger than conventionally found in ideal complexes, which was attributed to numerous hydrogen-bonding connections to the iron–hydroxo group. These restrictions increased the obstacles in the rotation and rebound pathways. In addition, the hydroxylation process was dominant following Cα-H removal, while decarboxylation was favoured following the extraction of the Cβ-H atom on the doublet spin state.
On the other hand, the surface of the quartet spin state was expected to liberate a mixture of products. This observation implies that the reaction should be modified to promote Cβ-H extraction and impede Cα-H removal to promote decarboxylation. The elevated rebound barrier was linked to hydrogen-bonding interactions because of the water channel in P450 OleTJE in addition to the polarity of the nearby carboxylate-Arg245 salt bridge (Faponle et al. 2016). The importance of water channels in P450 OleTJE was also demonstrated.
Bifurcation Pathways of OleTJE
Faponle et al. (2016) used a valence-bond (VB) diagram to describe the bifurcation pathways of P450 OleTJE (Figure 5). In this illustration, the wave function of the reactant (YI) joins an energised state in the geometry of the yields, which is Yreb* for the alcohol upshots and YD* for the outcomes of decarboxylation. In the same way, the wave function of the product joins the energised excited state in the reactant geometry (4,2IH,β).
The magnitude of the reaction impediment depends on the excitation energy required to transform the reactant to the product in the reactant geometry. Consequently, the bifurcation machinery is determined by the comparative magnitude of the excitation energies Greb or Gdecarb (Fapone et al., 2016). Therefore, the microelectronic disparities between the wave functions in the ground and excited states influence the barrier impediments and rate constants for the rebound and decarboxylation reactions.
An assessment of the VB structures in the geometry of 4,2IH,β with emphasis on the ground and excited states provides an understanding of the strictures that control the bifurcation variables.
For OH rebound and the development of hydroxylation outcomes, the πyz/π*yz set of orbitals was divided into two atomic orbitals, 3d yz,Fe and 2py,O. This process expended an estimated Eπ/π*yz in energy from the system, which was estimated at 37.2 kcal mol-1. The 2py,O orbital coupled with ΦSub, a substrate radical, to create the C-O bond orbital (σC-O) in a process that consumed a bond severance energy BDECO of 87.2 kcal mol-1.
On the other hand, since the 2py,O had a double occupancy in 4,2IH,β, it was imperative to upgrade one electron to a free low-lying orbital. The two available orbitals were 3dxz in the doublet spin state and the σ*z2 orbital in the foursome spin state, Eexc. This excitation energy was approximated at 75.3 kcal mol-1. Thus the value of Greb was predicted as 25.3 kcal mol-1. As a rule, the height of the barricade is usually 33.33% of the promotion gap, which implied that a rebound barrier of approximately 8 kcal mol-1 was applicable.
A valence-bond illustration of the electronic attributes of decarboxylation versus OH-rebound reactions. The lines depict chemical bonds, whereas the dots show valence electrons
Figure 5. A valence-bond illustration of the electronic attributes of decarboxylation versus OH-rebound reactions. The lines depict chemical bonds, whereas the dots show valence electrons (Faponle et al., 2016).
However, a different scenario was witnessed in the decarboxylation reaction, where two processes occurred simultaneously. The C-Cα bond of the substrate disintegrated (BDECC) while the leaving CO2 group was oxidised to CO2. This second process denoted the electron affinity of CO2 (EACO2). As a result, the electron duo in the C-Cα orbital (σCCα) was split, after which the electron on Cα coupled up with the reactive group on Cβ to create a π bond (π CαCβ) possessing energy Eπ.
The additional electron was taken up by the iron–hydroxo compound, which was reduced by an amount of energy that corresponded to its electron affinity (EAFeOH). The electron affinity of the iron complex was estimated at 56.5 kcal mol-1, whereas the strength of the C-C bond in the substrate, BDECC, was found to be 7.8 kcal mol-1 (Faponle et al., 2016). Conversely, the singlet to triplet energy gap was used to compute the energy of the π bond, which yielded a value of 73.7 kcal mol-1. A promotion gap for decarboxylation (Gdecarb) of 3.4 kcal mol-1 was computed using the standard electron affinity of CO2 (‑0.60 eV).
The findings pointed towards a minor reaction barrier that was estimated to be less than 1 kcal mol-1. Figure 6 indicates the calculated bifurcation impediments of different radical intermediates.
Computed bifurcation barriers of various radical intermediates
Figure 6. Computed bifurcation barriers of various radical intermediates (Faponle et al., 2016).
Factors Affecting Decarboxylation Reactions of OleTJE
Before designing an enzyme to overproduce a given product, it is important to determine the factors that promote the reaction pathway that leads to the desired product. Investigations show that the decarboxylation pathway of OleTJE is mainly influenced by substrate factors (Xu et al., 2017). These aspects include the intensity of the C-Cα bond, the energy required to create the π bond between Cα-Cβ, and the electron affinity of CO2 (Faponle et al. 2016; Fang et al. 2017).
The impact of the oxidant on the decarboxylation reaction only happens via the electron affinity of the iron (IV)–hydroxo moiety. Consequently, enhancing the regioselectivity of decarboxylation as opposed to hydroxylation needs to concentrate on weakening the hydroxylation path by engineering the substrate-binding compartment or using alternate substrates. The computations presented by Faponle et al. (2016) show that the substrate-binding compartment of P450 OleTJE is highly polar and has numerous hydrogen-bonding interactions, which are crucial to the disruption of the radical rebound reactions that lead to the formation of α- and β-hydroxo fatty acids.
In addition, electron-extracting moieties attached to the Cα or Cβ locus would elevate the energy required to form the C-O bond, which would increase the radical rebound barriers. In contrast, the decarboxylation reactions would be boosted further by using substrates with an alkene bond between Cγ and Cδ. Such substrates would promote the π conjugation with the Cα-Cβ π bond, thereby leading to increased Eπ energy and stabilisation of the decarboxylation process.
This literature review has provided useful insights that will guide the engineering of OleTJE to enhance its productivity of biofuels. The following insights have revealed that the enhancement of olefin product distributions will require site-directed mutations that impede α- and β-hydroxylation as well as altering the substrate-binding compartment to limit hydroxylation rebound reactions (Li et al. 2015). It is also evident that the choice of substrate, reaction temperature, and OleTJE systems or combination of systems (Dennig et al. 2015) is critical in obtaining optimal terminal olefin yields from fatty acids. These pointers will inform the computational methods in the following section.
Computational modelling of biosystems provides researchers with a basis to establish their experiments because it eliminates redundant variables that would otherwise be a waste of resources. Quantum mechanics/molecular mechanics (QM/MM) techniques have become popular over the years due to their capacity to provide data regarding the starting point of products or by-products through calculated reaction machinery and energy settings (de Visser et al. 2014).
As a result, it is possible to collect data regarding factors that influence enzyme reactions, rates, and specificities by analysing the different states of the molecules involved in the reactions. On the other hand, cluster replicas such as the Density Functional Theory (DFT) enable voluminous computations in a short span by analysing precise features of a single protein (de Visser 2009; Melander et al. 2016).
Even though cluster models shed light on the intrinsic activity of enzymes’ active sites, problems result from key bonding patterns and lengths. This section of the proposal provides a detailed procedure using general QM/MM optimization protocols to promote the decarboxylation mechanisms of cytochrome P450 OleTJE in the production of terminal olefins. The proposed elements of improvement, as highlighted in the literature review, will be incorporated into the procedure to determine the most effective manipulations that yield favourable outcomes.
QM/MM Methodology
QM/MM methods start by finding heavy-atom coordinates of the protein (enzyme) of interest from X-ray crystallographers in the literature or the protein databank library. These coordinates will be obtained in a pdb format, which implies that the structure will require editing and additional modifications prior to running simulations. Examples of relevant modifications include the addition of hydrogen atoms, missing chains, amino acid residues, and substrates. Crystal additive buffers should also be removed. The structure of the active site can also be altered, after which the system can be solvated.
These procedures permit the stabilization of the system to promote the proper folding of proteins. A molecular dynamics (MD) simulation is then executed to produce snapshots that are necessary to implement the QM/MM computations, which split up the biochemical system into inner and outer cores. The computations of the inner core are usually done using ab initio or DFT techniques that have high levels of accuracy (Lewars 2016). On the other hand, outer core calculations are effected using MM approaches. These procedures have been authenticated with experimental data many times by Quesne et al. (2016).
Selection of Starting Structure in pdb Format
It is imperative to observe high levels of accuracy and caution when choosing the starting crystal structure from the protein databank. At the same time, it is difficult to find the ideal structure directly from the protein databank, which implies that modifications are inevitable. However, it is necessary to find the most comprehensive structure to reduce the time required for modifications. Overall, pdb files indicate whether atoms and residues are absent.
Thus it is the user’s responsibility to ascertain whether the missing attributes are protein side-chains, loop areas, substrates, binding proteins, co-factors, or reducing partners and select the file with the closest semblance to the preferred structure. It is also necessary to select high-resolution pdb files (minimum resolution of 2 Å) to ensure accurate identification of atoms and their loci in the protein. In addition, the investigator will prefer files with Rfree values of 0.3 or lower to ensure that reliable structures are chosen.
Modifying the Starting Structure
The most informative data on enzymatic reactions are obtained from QM/MM computations of fast reactions. However, the pace of these reactions and their corresponding short-lived intermediates complicates finding their crystal coordinates. Therefore, a previous structure in a resting state will be taken and modified appropriately. Starting structure modifications that will be relevant in this process will be informed by the findings and recommendations of previous studies.
The modifications will involve substrate modification and alteration of the enzyme’s active site. Faponle et al. (2016) proposed that enhancing the regioselectivity of decarboxylation as opposed to hydroxylation reactions should focus on weakening the hydroxylation path by engineering the substrate-binding compartment or using alternate substrates. The computations presented by Faponle et al. (2016) show that the substrate-binding compartment of P450 OleTJE is highly polar and has numerous hydrogen-bonding interactions, which are crucial to the disruption of the radical rebound reactions leading to the formation of α- and β-hydroxo fatty acids.
Therefore, it is hypothesized that a further increase in the polarity of P450 OleTJE’s active site will impede the reactions leading to hydroxylation products. Another potential modification is attaching electron-extracting moieties to the Cα or Cβ locus to elevate the energy required to form the C-O bond and increase the radical rebound barriers.
Substrate Docking Protocols
The binding of a substrate to an enzyme triggers a conformational modification in the three-dimensional organization of the resultant enzyme-substrate complex. However, the pace of this process is too fast to permit the crystallisation of the analogous structure. Molecular and computational docking procedures have been established to forecast the orientations and loci of binding substrates.
An online server, SwissDock, which is available at www.swissdock.ch, will be used for this step. Docking procedures take advantage of basic data regarding the substrate and protein surface, including the types and configurations of atoms, atomic alterations, and polarity. These data are compared to find the most suitable lock-and-key pairs. The enzyme and substrates will be tested in each likely orientation, followed by the computation of energies for each alignment using MM methods. The probable enzyme-substrate complex will be identified by selecting the arrangement with the least energy. Software packages such as Autodock will be useful in this regard (Quesne et al., 2016).
The Addition of Hydrogen Atoms and Protein Groups to the Structure
Hydrogen atoms will be added to the protein structure in the pdb file following the execution of geometrical optimizations. A number of online programs and packages will be beneficial in this process to enhance the detectability of the right positions of hydrogen atoms. The addition of hydrogen atoms is determined by the oxidation states, pKa systems, and hybridisation of the atoms.
Selection of Molecular Modelling Replicas
The right field force needs to be used to minimize the energy of bonds before conducting QM/MMM computations. Energy minimisation promotes the attainment of correct bond lengths and hybridisation structures. Several field forces are available. However, since this project involves an enzyme, a biochemical field force such as GROMOS, CHARMM, or AMBER will be ideal for the process. The force field calculations will make use of the following equations.
The first equation shows that MM energy (EMM) is the sum of bonded and non-bonded interactions. In the above equations, d denotes the bond length, ϴ shows the angles of valence bonding, and x stands for torsional dihedral angles. The dihedral attribute is a cosine function that takes into account the torsional multiplicity (n) and phase (δ) with respect to a bonding constant (kx). The non-bonding energy between two atoms A and B is spread between two interactions: a Lennard-Jones expression and a Coulomb interaction, as shown in the third equation.
The Lennard-Jones expression includes the rAB term that specifies the interatomic distance, whereas ϵAB and σAB describe the depth, breadth, and location of the dormant well. On the other hand, the (qAqB) term represents the effective atomic charge of the two atoms A and B, whereas ϵ0 describes permittivity in a vacuum. The value of qAqB is directly proportional to the inverse distance between the two atoms and ϵ0.
Solvation of the Protein Structure
Crystallised protein structures in pdb format lack some of the water molecules that should ordinarily be available. This lack arises from the fact that most crystallisation approaches require the elimination of water from protein suspensions (Jindal & Warshel 2016). Consequently, it will be necessary to add water to the initial starting structure to re-solvate the starting structure to simulate a lifelike enzyme model at room temperature. Re-solvation also improves the in vivo and in vitro portrayal of polarity. The major shortcoming of available resolving methods is that it is difficult to get water molecules to the inner regions of the protein. In addition, water molecules have the tendency to leave pits in the boundary of the protein, which interferes with the calculations (Jindal & Warshel 2016).
Therefore, it is necessary to re-solvate the system several times until no more water molecules can be added to the protein structure. Molecular, dynamic equilibration methods will be used to transfer water molecules into the pits. One method requires conducting a long MD simulation while exerting pressure from the perimeters of the system to compel the conveyance of water into the centre of the protein. The loci of the added water molecules will be checked following solvation to ensure that no hydrogen bonds have formed using atoms required in the reaction pathway.
Heating, Equilibrating and Conducting Molecular Dynamics Simulation on the Solvated Structure
Heating a chemical system is necessary to elevate its temperature, equilibrate the complex, and conduct MD simulations. The temperature of the system will be raised gradually at small intervals ranging from 0oK to 298oK. This process uses the applied force field to minimise the hydrogen atoms and solvent molecules. When the desired temperatures are achieved, the protein backbone becomes flexible, which equilibrates the system. At this point, MD simulations will be run for 500 to 1000 picoseconds to generate snapshots of the protein structure, reaction intermediates, and products for the required QM/MM computations. Different temperatures will be investigated to determine the impact of temperature on the decarboxylation mechanism of OleTJE.
Selection of Snapshots from the MD Simulation
It will be necessary to select a number of snapshots from the MD simulation because each snapshot will provide different calculations with respect to the reaction rate, barrier height, and product formation. In addition, changes in the patterns of hydrogen bonds within a single water molecule are likely to alter the total energy by approximately 4 kcal mol-1 (Hayashi et al., 2017). Therefore, several snapshots will permit the evaluation of varying conformations of the protein.
A number of MD snapshots will be taken from various points alongside the MD path graph. Each recreation is considered a starting point for the development of reaction profiles. Additionally, snapshots from different protein milieus will have varying parameters such as local minima of comparative energies, spin-state gaps, and order. Therefore, the average of the findings will be obtained when evaluating reaction schemes from more than one snapshot.
Selection of the QM Region, QM Methods, and Basis Set
The energetics of a reaction path and the electronic configuration of the active enzyme is influenced by the choice and size of the QM region. A small QM region can lead to incorrect descriptions of the polarisation of the system. Conversely, extremely large QM regions can lead to excessive polarisation. These outcomes are attributed to disparities in bond lengths when comparing QM regions of varying sizes. Therefore, it will be necessary to scrutinise numerous QM regions with diverse sizes to ascertain consistencies and select the appropriate regions.
Electrostatic and Mechanical Embedding Methods
When computing energies in the QM region, it will be necessary to consider the charges in the MM zone. Consequently, electrostatic or mechanical-embedding systems will be helpful. The mechanical-embedding procedures determine van der Waals forces and electrostatic influences of the MM atoms in the QM zone (Jindal & Warshel 2016). Geometric strictures such as the expanse of van der Waals interactions will be needed to separate the QM and MM energies. The QM/MM expressions will subsequently be incorporated directly into the force field estimations.
Calculation of QM/MM Energies
QM/MM energies will be calculated by completing geometrical optimisations on the QM and MM zones. Two methods of optimisation are possible: adiabatic and diabatic (Duarte et al., 2015). The adiabatic geometry optimisation approach entails relaxing the macro element exchanges inside the QM core as well as the micro-component interactions brought about by the neighbouring MM environment. In contrast, the diabatic style encompasses freezing the QM core region during the optimisation of the MM environment and vice versa. However, the total of the QM and MM energies do not account for the entire QM/MM energy of a system because linker atoms intermingle in the boundary of the two regions, thus producing energies that should be considered in the computations. These coupling terms are described by the following equation.
QM/MM computations can be achieved by assimilating an MM into a QM software package, for example, the ONIOM program, which is part of the Gaussian software package. These calculations can be executed using additive and subtractive procedures, as shown in the following equations.
The subtractive approach, which will be used in this project, involves using linker atoms (L) to conceal the inner subsystems. The EQM (I + L) term symbolises the sum of the QM region (I) and linker atoms (L) at the QM level. Conversely, the EMM (I + L) term denotes a similar calculation at the MM level. However, it is important to subtract the MM energy of the inner subsystem from the energy of the entire system (E whole MM) to preclude double tallying of energies. The main benefit of the subtractive method is that it does not need a QM/MM joining term (Duarte et al., 2015).
Computational resources for this project, for example, software such as Autodock, CHARMM, DL_POLY, and TURBOMOLE, and hardware required to implement the computations, are already licensed by the University of Manchester, thus no additional costs will be incurred. Workspace is also available at the School of Computer Science at the University of Manchester. The primary researcher is a master’s student at the University of Manchester. Therefore, he will not require any compensation for the services offered. The primary supervisor will be remunerated by the University of Manchester.
Figure 7 provides a rough guide to schedule until completion of the project, which is expected to run from 1 June 2018 to 20 September 2018. However, this schedule may change due to unanticipated circumstances, and thus the chart will be used as a guideline.
A Gantt chart of the proposed scheduling of the entire project.
Figure 7. A Gantt chart of the proposed scheduling of the entire project.
This project is expected to have a vast impact, including academic, personal, industrial, societal, and economic outcomes. The completion of this project will enhance the personal knowledge of the student and contribute to their research and career growth. This project will provide an opportunity for the student to learn and appreciate computational modelling. The findings of the project will be published in peer-reviewed journals, thus permitting the student to disseminate the findings of their work to other researchers in the field.
This work will shed more light on the versatility of the cytochrome P450 enzyme, which may inform future studies to exploit this group of enzymes in the production of other useful substances. This feat will bring personal satisfaction, a personal sense of achievement and contribution to the common good of society. The successful conclusion of this venture will provide academic benefits by enabling the master’s student to develop a detailed, high-quality dissertation. Consequently, the student will be able to fulfil the academic requirements of the University of Manchester for the award of a master’s degree.
Industries can benefit from this work by switching to cost-effective and sustainable energy sources for their processes. The findings of this study can be implemented on a large scale and commercialized to produce large quantities of biofuel. Therefore, new industries specializing in the production of biofuels can emerge.
The combustion of petroleum from fossil fuels has adverse environmental outcomes. This process emits significant amounts of carbon monoxide gas, which pollutes the environment, adds to greenhouse gases, and leads to global warming, among other adverse consequences. The productive implementation of this project will lead to the development of an environmentally friendly source of fuel and reduce the levels of pollution associated with the combustion of petroleum fuels.
This move will result in societal benefits by reducing the burden of air, soil, and water pollution. Pollution has negative effects on human and animal health, including respiratory complications (air pollution) and gastrointestinal problems (water and soil pollution). Biofuel use will eliminate these health concerns and lead to healthier populations. However, as much as minimising the emission of greenhouse gases is among the key motivations for the development of biofuels, other unanticipated effects in terms of soil attributes, biodiversity, and water quality may arise. Therefore, it may be necessary to conduct thorough assessments concerning the environmental impact of terminal alkene production by OleTJE.
Economic benefits include the ability to generate large quantities of biofuel. Previous methods of generating terminal olefins using NAD(P)H and hydrogen peroxide are not economically viable due to the prohibitive cost of reducing power and large-scale use of hydrogen peroxide. The successful completion of this project will enable the engineering of OleTJE to produce large quantities of biofuel through a peroxide-independent pathway, which will eliminate the need for hydrogen peroxide. Instead, molecular oxygen, which is readily available, will be used as the oxidant. Current methods of producing biofuel are still more expensive than the cost of producing fossil fuel.
Engineering OleTJE to overproduce biofuels using cost-effective methods will make the production of biofuels sustainable and affordable, which will encourage people to use them for their energy needs. Additionally, eliminating the undesired α- and β-hydroxylation products will produce large quantities of terminal olefins for use as biofuel. In this way, concerns about the escalating costs of crude oil and the increasing consumption of fuel will be addressed by the cost-efficient production of biofuels. Countries with little or no fossil fuel reserves will be able to enjoy energy security through the adoption of this technology.
The advancement of biofuels provides opportunities and challenges for developing countries. Non-oil producing nations may obtain partial substitutes for expensive oil imports. This venture may also provide an extra source of income and contribute towards rural development and enhancement of local infrastructures. Other possible economic benefits include the emergence of biofuel industries, which will lead to economic growth through job creation and the sale of raw materials (feedstock) and products. However, negative effects, such as the depletion of water resources, food insecurity, and pollution, may also occur. Therefore, detailed cost-benefit analyses are necessary before embarking on large-scale biofuel production ventures.
Belcher, J, McLean, KJ, Matthews, S, Woodward, LS, Fisher, K, Rigby, SE, Nelson, DR, Potts, D, Baynham, MT, Parker, DA & Leys, D 2014, ‘Structure and biochemical properties of the alkene producing cytochrome P450 OleTJE (CYP152L1) from the Jeotgalicoccus sp. 8456 bacterium’, Journal of Biological Chemistry, vol. 289, no. 10, pp. 6535-6550.
Chen, B, Lee, DY & Chang, MW 2015, ‘Combinatorial metabolic engineering of Saccharomyces cerevisiae for terminal alkene production’, Metabolic Engineering, vol. 31, pp. 53-61.
Covert, T, Greenstone, M & Knittel, CR 2016, ‘Will we ever stop using fossil fuels?’ Journal of Economic Perspectives, vol. 30, no. 1, pp. 117-138.
de Visser, SP 2009, ‘Introduction to quantum behaviour – a primer,’ in RK Allemann & NS Scrutton (eds), Quantum tunnelling in enzyme-catalysed reactions, Royal Society of Chemistry, Cambridge, UK, pp. 18-35.
de Visser, SP, Quesne, MG, Martin, B, Comba, P & Ryde, U 2014, ‘Computational modelling of oxygenation processes in enzymes and biomimetic model complexes’, Chemical Communications, vol. 50, no. 3, pp. 262-282.
Dennig, A, Kuhn, M, Tassoti, S, Thiessenhusen, A, Gilch, S, Bülter, T, Haas, T, Hall, M & Faber, K 2015, ‘Oxidative decarboxylation of short‐chain fatty acids to 1‐alkenes’, Angewandte Chemie International Edition, vol. 54, no. 30, pp. 8819-8822.
Duarte, F, Amrein, BA, Blaha-Nelson, D & Kamerlin, SC 2015, ‘Recent advances in QM/MM free energy calculations using reference potentials’, Biochimica et Biophysica Acta (BBA)-General Subjects, vol. 1850, no. 5, pp. 954-965.
Fang, B, Xu, H, Liu, Y, Qi, F, Zhang, W, Chen, H, Wang, C, Wang, Y, Yang, W & Li, S 2017, ‘Mutagenesis and redox partners analysis of the P450 fatty acid decarboxylase OleT JE’, Scientific Reports, vol. 7, p. 44258.
Faponle, AS, Quesne, MG & de Visser, SP 2016, ‘Origin of the regioselective fatty‐acid hydroxylation versus decarboxylation by a cytochrome P450 Peroxygenase: what drives the reaction to biofuel production?’ Chemistry-A European Journal, vol. 22, no. 16, pp. 5478-5483.
Grant, JL, Hsieh, CH & Makris, TM 2015, ‘Decarboxylation of fatty acids to terminal alkenes by cytochrome P450 compound I’, Journal of the American Chemical Society, vol. 137, no. 15, pp. 4940-4943.
Hayashi, S, Uchida, Y, Hasegawa, T, Higashi, M, Kosugi, T & Kamiya, M 2017, ‘QM/MM geometry optimization on extensive free-energy surfaces for examination of enzymatic reactions and design of novel functional properties of proteins’, Annual Review of Physical Chemistry, vol. 68, pp. 135-154.
Ji, L, Faponle, AS, Quesne, MG, Sainna, MA, Zhang, J, Franke, A, Kumar, D, van Eldik, R, Liu, W & de Visser, SP 2015, ‘Drug metabolism by cytochrome P450 enzymes: what distinguishes the pathways leading to substrate hydroxylation over desaturation?’ Chemistry-A European Journal, vol. 21, no. 25, pp. 9083-9092.
Jindal, G & Warshel, A 2016, ‘Exploring the dependence of QM/MM calculations of enzyme catalysis on the size of the QM region’, The Journal of Physical Chemistry B, vol. 120, no. 37, pp. 9913-9921.
Lewars, EG 2016, Computational chemistry: introduction to the theory and applications of molecular and quantum mechanics, Springer, New York.
Liao, JC, Mi, L, Pontrelli, S & Luo, S 2016, ‘Fuelling the future: microbial engineering for the production of sustainable biofuels’, Nature Reviews Microbiology, vol. 14, no. 5, pp. 288-304.
Lin, FM, Marsh, ENG & Lin, XN 2015, ‘Recent progress in hydrocarbon biofuel synthesis: pathways and enzymes’, Chinese Chemical Letters, vol. 26, no. 4, pp. 431-434.
Liu, Y, Wang, C, Yan, J, Zhang, W, Guan, W, Lu, X & Li, S 2014, ‘Hydrogen peroxide-independent production of α-alkenes by OleT JE P450 fatty acid decarboxylase’, Biotechnology for Biofuels, vol. 7, no. 1, p. 28.
Matthews, S, Belcher, JD, Tee, KL, Girvan, HM, McLean, KJ, Rigby, SE, Levy, CW, Leys, D, Parker, DA, Blankley, RT & Munro, AW 2017a, ‘Catalytic determinants of alkene production by the cytochrome P450 peroxygenase OleTJE’, Journal of Biological Chemistry, vol. 292, no. 12, pp. 5128-5143.
Matthews, S, Tee, KL, Rattray, NJ, McLean, KJ, Leys, D, Parker, DA, Blankley, RT & Munro, AW 2017b, ‘Production of alkenes and novel secondary products by P450 OleTJE using novel H2O2‐generating fusion protein systems’, FEBS Letters, vol. 591, no. 5, pp. 737-750.
Melander, M, Jónsson, EO, Mortensen, JJ, Vegge, T & García Lastra, JM 2016, ‘Implementation of constrained DFT for computing charge transfer rates within the projector augmented wave method’, Journal of Chemical Theory and Computation, vol. 12, no. 11, pp. 5367-5378.
Munro, AW, McLean, KJ, Grant, JL & Makris, TM 2018, ‘Structure and function of the cytochrome P450 peroxygenase enzymes’, Biochemical Society Transactions, vol. 46, no. 1, pp. 183-196.
Nicoletti, G, Arcuri, N, Nicoletti, G & Bruno, R 2015, ‘A technical and environmental comparison between hydrogen and some fossil fuels’, Energy Conversion and Management, vol. 89, pp. 205-213.
Quesne, MG, Borowski, T & de Visser, SP 2016, ‘Quantum mechanics/molecular mechanics modelling of enzymatic processes: caveats and breakthroughs,’ Chemistry–A European Journal, vol. 22, no. 8, pp. 2562-2581.
Wang, J & Zhu, K 2018, ‘Microbial production of alka (e) ne biofuels’, Current Opinion in Biotechnology, vol. 50, pp. 11-18.
Xu, H, Ning, L, Yang, W, Fang, B, Wang, C, Wang, Y, Xu, J, Collin, S, Laeuffer, F, Fourage, L & Li, S 2017, ‘In vitro oxidative decarboxylation of free fatty acids to terminal alkenes by two new P450 peroxygenases’, Biotechnology for Biofuels, vol. 10, no. 1, p. 208.
Zargar, A, Bailey, CB, Haushalter, RW, Eiben, CB, Katz, L & Keasling, JD 2017, ‘Leveraging microbial biosynthetic pathways for the generation of ‘drop-in’ biofuels’, Current Opinion in Biotechnology, vol. 45, pp. 156-163.
Zhao, YJ, Cheng, QQ, Su, P, Chen, X, Wang, XJ, Gao, W, & Huang, LQ 2014, ‘Research progress relating to the role of cytochrome P450 in the biosynthesis of terpenoids in medicinal plants’, Applied Microbiology and Biotechnology, vol. 98, no. 6, pp. 2371-2383.
This proposal on Engineering OleT Enzyme for Better Biofuel Yield was written and submitted by your fellow student. You are free to use it for research and reference purposes in order to write your own paper; however, you must cite it accordingly.
Removal Request
Request the removal
Need a custom Proposal sample written from scratch by
professional specifically for you?
Writer online avatar
Writer online avatar
Writer online avatar
Writer online avatar
Writer online avatar
Writer online avatar
Writer online avatar
Writer online avatar
Writer online avatar
Writer online avatar
Writer online avatar
Writer online avatar
certified writers online
Cite This paper
Select a referencing style:
IvyPanda. (2021, May 18). Engineering OleT Enzyme for Better Biofuel Yield. Retrieved from https://ivypanda.com/essays/engineering-olet-enzyme-for-better-biofuel-yield/
Work Cited
"Engineering OleT Enzyme for Better Biofuel Yield." IvyPanda, 18 May 2021, ivypanda.com/essays/engineering-olet-enzyme-for-better-biofuel-yield/.
1. IvyPanda. "Engineering OleT Enzyme for Better Biofuel Yield." May 18, 2021. https://ivypanda.com/essays/engineering-olet-enzyme-for-better-biofuel-yield/.
IvyPanda. (2021) 'Engineering OleT Enzyme for Better Biofuel Yield'. 18 May.
More related papers
Psst... Stuck with your
assignment? 😱
Psst... Stuck with your assignment? 😱
Do you need an essay to be done?
What type of assignment 📝 do you need? |
3a606d3541e152a1 | Collective Neutrino Flavor Transformation In Supernovae
Huaiyu Duan George M. Fuller Department of Physics, University of California, San Diego, La Jolla, CA 92093-0319 Yong-Zhong Qian School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455
May 20, 2021
We examine coherent active-active channel neutrino flavor evolution in environments where neutrino-neutrino forward scattering can engender large-scale collective flavor transformation. We introduce the concept of neutrino flavor isospin which treats neutrinos and antineutrinos on an equal footing, and which facilitates the analysis of neutrino systems in terms of the spin precession analogy. We point out a key quantity, the “total effective energy”, which is conserved in several important regimes. Using this concept, we analyze collective neutrino and antineutrino flavor oscillation in the “synchronized” mode and what we term the “bi-polar” mode. We thereby are able to explain why large collective flavor mixing can develop on short timescales even when vacuum mixing angles are small in, e.g., a dense gas of initially pure and with an inverted neutrino mass hierarchy (an example of bi-polar oscillation). In the context of the spin precession analogy, we find that the co-rotating frame provides insights into more general systems, where either the synchronized or bi-polar mode could arise. For example, we use the co-rotating frame to demonstrate how large flavor mixing in the bi-polar mode can occur in the presence of a large and dominant matter background. We use the adiabatic condition to derive a simple criterion for determining whether the synchronized or bi-polar mode will occur. Based on this criterion we predict that neutrinos and antineutrinos emitted from a proto-neutron star in a core-collapse supernova event can experience synchronized and bi-polar flavor transformations in sequence before conventional Mikhyev-Smirnov-Wolfenstein flavor evolution takes over. This certainly will affect the analyses of future supernova neutrino signals, and might affect the treatment of shock re-heating rates and nucleosynthesis depending on the depth at which collective transformation arises.
14.60.Pq, 97.60.Bw
I Introduction
In both the early universe and in core-collapse supernovae, neutrinos and antineutrinos can dominate energetics and can be instrumental in setting compositions (i.e., the neutron-to-proton ratio). However, the way these particles couple to matter in these environments frequently is flavor specific. Whenever there are differences in the number fluxes or energy distribution functions among the active neutrino species (, , , , and ), flavor mixing and conversion can be important Fuller et al. (1987, 1992); Sigl and Raffelt (1993); Qian et al. (1993); Qian and Fuller (1995a); Pastor et al. (2002); Schirato and Fuller (2002); Balantekin and Yüksel (2005).
In turn, the flavor conversion process becomes complicated and nonlinear in environments with large effective neutrino and/or antineutrino number densities Fuller et al. (1987); Pantaleone (1992); Sigl and Raffelt (1993); Samuel (1993); Qian and Fuller (1995a, b). In these circumstances neutrino-neutrino forward scattering can become an important determinant of the way in which neutrinos and antineutrinos oscillate among flavor states.
Two of the three vacuum mixing angles for the active neutrinos are now measured. The third angle () is constrained by experiments and is limited to values such that (see, e.g., Ref. Fogli et al. (2006) for a review). In addition, the differences of the squares of the neutrino mass eigenvalues are now measured, though the absolute masses and, therefore, the neutrino mass hierarchy remains unknown.
Both the solar and atmospheric neutrino mass-squared differences are small, so small in fact that conventional matter-driven Mikhyev-Smirnov-Wolfenstein (MSW) evolution Wolfenstein (1978, 1979); Mikheyev and Smirnov (1985) would suggest that neutrino and/or antineutrino flavor conversion occurs only far out in the supernova envelope. On the other hand, it has been shown that plausible conditions of neutrino flux in both the early shock re-heating epoch and the later neutrino-driven wind, -process epoch, could provide the necessary condition for neutrino-neutrino forward scattering induced large-scale flavor conversion deep in the supernova environment Fuller and Qian (2006).
The treatment of the flavor evolution of supernova neutrinos remains a complicated problem, and the exact solution to this problem may only be revealed by full self-consistent numerical simulations. However, physical insights still can be gained by studying somewhat simplified models of the realistic environments. For example, one source of complication is that there are three active flavors of neutrinos in play. As the measured vacuum mass-squared difference for atmospheric neutrino oscillations () is much larger than that for solar neutrino oscillations (), the general problem of three-neutrino mixing in many cases may be reduced to two separate cases of two-neutrino mixing, each involving () and some linear combination of and ( and ). This reduction allows the possibility of visualizing the neutrino flavor transformation as the rotation of a “polarization vector” in a three dimensional flavor space Mikheyev and Smirnov (1986). Different notations have been developed around this concept (see, e.g., Refs. Sigl and Raffelt (1993); Kostelecky and Samuel (1994)). However, none of these notations fully exhibits the symmetry of particles and anti-particles in the SU(2) group that governs the flavor transformation.
The equations of motion (e.o.m.) of a neutrino “polarization vector” is similar to those of a magnetic spin precessing around magnetic fields. One naturally expects that some collective behaviors may exist in dense neutrino gases just as for magnetic spins in crystals. Indeed, it was observed in numerical simulations that neutrinos with different energies in a dense gas act as if they have the same vacuum oscillation frequency Samuel (1993). This collective behavior was later explained by drawing analogy to atomic spin-orbit coupling in external fields and termed “synchronization” Pastor et al. (2002).
Another, more puzzling, type of collective flavor transformation, the “bi-polar” mode, has been observed in the numerical simulations of a dense gas of initially pure and Kostelecky and Samuel (1993). This type of collective flavor transformation usually occurs on timescales much shorter than those of vacuum oscillations. Although the analytical solutions to some simple examples of “bi-polar” systems have been found Kostelecky and Samuel (1995); Samuel (1996), many aspects of these bi-polar systems still remain to be understood. In particular, it seems counter-intuitive that, even for a small mixing angle, large flavor mixing occurs in both the neutrino and antineutrino sectors in a dense gas initially consisting of pure and for an inverted mass hierarchy.
Both synchronized and bi-polar flavor transformation were discovered in the numerical simulations aimed at the early universe environment. It has been shown that synchronized oscillation can also occur in the supernova environment Pastor and Raffelt (2002). However, it is not clear if supernova neutrinos can also have bi-polar flavor transformation. If supernova neutrinos can have collective synchronized and/or bi-polar oscillations, the questions are then where these collective oscillations would occur and how neutrino energy spectra would be modified.
In this paper we try to answer the above questions. In Sec. II we will give the general equations governing the mixing of two neutrino flavors in the frequently used forms and introduce the notation of neutrino flavor isospin, which treats neutrinos and antineutrinos on an equal footing. We will also point out a key quantity, the “total effective energy”, in analogy to the total energy of magnetic spin systems, which is conserved in some interesting cases. In Sec. III and Sec. IV we will analyze the synchronized and bi-polar neutrino systems using the same framework in each case. We will first describe and explain the main features of these collective modes using the concept of total effective energy. We then generalize these analyses by employing “co-rotating frames”. We will derive the criteria for the occurrence of these collective modes, and discuss the effects of an ordinary matter background. In Sec. V we will outline the regions in supernovae where the neutrino mixing is dominated by the synchronized, bi-polar and conventional MSW flavor transformations. We will also describe the typical neutrino mixing scenarios expected with different matter density profiles. In Sec. VI we will summarize our new findings and give our conclusions.
Ii General Equations Governing Neutrino Flavor Transformation
We consider the mixing of two neutrino flavor eigenstates, say and , which are linear combinations of the vacuum mass eigenstates and with eigenvalues and , respectively:
where is the vacuum mixing angle. We take and refer to as the normal mass hierarchy and as the inverted mass hierarchy. When a neutrino with energy propagates in matter, the evolution of its wavefunction in the flavor basis
is governed by a Schrödinger-like equation
where and are the amplitudes for the neutrino to be in and at time , respectively. (This equation is “Schrödinger-like” because, unlike the Schrödinger equation, we are here concerned with flavor evolution at fixed energy and with relativistic leptons.) The vacuum mass contribution to the propagation Hamiltonian in the flavor basis is
and the contribution due to forward scattering on electrons in the same basis is
where with being the net electron number density. Eq. (3) also applies to the antineutrino wavefunction
if in is replaced by .
When a large number of neutrinos and antineutrinos propagate through the same region of matter, their forward scattering on each other makes another contribution to the propagation Hamiltonian for each particle. For the th neutrino, this contribution is Fuller et al. (1987); Notzöld and Raffelt (1988); Pantaleone (1992); Sigl and Raffelt (1993); Qian and Fuller (1995a)
In the above equations, is the angle between the propagation directions of the th neutrino and the th neutrino or antineutrino, and () and () are the number density and single-particle flavor-basis density matrix of the th neutrino (antineutrino), respectively. Specifically,
where we have adopted the convention for the density matrix of an antineutrino in Ref. Sigl and Raffelt (1993). The neutrino-neutrino forward scattering contribution for an antineutrino can be obtained by making the substitution and in .
The single-particle density matrices in Eqs. (9a) and (9b) can be written in the form
where is the polarization vector in the three-dimensional (Euclidean) flavor space and represents the Pauli matrices. Explicitly, the polarization vectors in column form are
By straightforward algebra, it can be shown that the Schrödinger-like equation
and a similar equation for an antineutrino lead to Sigl and Raffelt (1993)
The three real components of the polarization vector contain the same information as the two complex amplitudes of the wavefunction except for an overall phase which is irrelevant for flavor transformation. Therefore, Eqs. (14a) and (14b) are equivalent to the Schrödinger-like equations. Eqs. (14a) and (14b) appear to suggest a geometric picture of precessing polarization vectors. This picture has been discussed quite extensively in the literature (see, e.g., Kim et al. (1987, 1988); Pastor et al. (2002)) and shown to be especially helpful in understanding flavor transformation when neutrino self-interaction (i.e., neutrino-neutrino forward scattering) is important. To facilitate the use of this picture, we briefly discuss the physics behind it and introduce some notations.
For simplicity, we first consider only the contributions and to the propagation Hamiltonian for a neutrino. In this case, we can write
with and being the unit vectors in the - and -directions in the flavor basis, respectively. Eq. (15) takes the form of the interaction between the “magnetic moment” of a spin- particle and an external “magnetic field” with
Here is the “gyromagnetic ratio” and can be chosen arbitrarily. Classically, the spin would experience a torque and its e.o.m. would be given by the angular momentum theorem:
By Ehrenfest’s theorem, the quantum mechanical description of a system has the same form as the classical e.o.m. provided that all physical observables are replaced by the expectation values of their quantum mechanical operators. In the present case, if we replace in Eq. (20) by
then neutrino flavor transformation governed by can be described quantum mechanically by
which is the same as Eq. (14a) in the absence of neutrino self-interaction. Clearly, the operator in Eqs. (15) and (21) represents a fictitious spin in the neutrino flavor space, which may be appropriately called the neutrino flavor isospin (NFIS). The flavor eigenstates and correspond to the up and down eigenstates, respectively, of the -component of . We will loosely refer to the expectation value of this operator as the NFIS and use it instead of the polarization vector to describe neutrino flavor transformation. The -component of a NFIS is of special importance as it determines the probability for the corresponding neutrino to be in :
Therefore, for a neutrino, , and 0 correspond to , and a maximally mixed state, respectively.
Adiabatic MSW flavor conversion has a simple explanation in this “magnetic spin” analogy. For illustrative purposes we assume and . As a propagates from a region with large matter density, e.g., the core of the sun, to a region of very little ordinary matter, changes its direction from to . If the density of electrons changes only slowly along the way (adiabatic process), also changes slowly, and the NFIS corresponding to the neutrino is always anti-aligned with . Therefore, the neutrino originally in the eigenstate () is now mostly in the eigenstate ().
It is useful to illustrate the criterion for adiabadicity of this process in the “magnetic spin” analogy. First, we note that the probabilities for a neutrino to be in instantaneous mass eigenstates (light) and (heavy) are
respectively, where is the angle between the directions of and , and are the unit vectors for the instantaneous mass basis. In the MSW picture, is the instantaneous matter mixing angle. In an adiabatic process, and are constant, and so is . Using Eq. (22) we have
On a timescale , has rotated by at least one cycle around . If changes its direction only by a small angle during , then in Eq. (25) averages to . Noting that , one can see that the angle is unchanged in this process. Therefore, the criterion for a MSW flavor transformation to be adiabatic is
which is equivalent to saying that the rate of change of the direction of the “magnetic field” is much smaller than the rotating rate of the“magnetic spin” around .
The full version of Eq. (14a) can be obtained by extending the Hamiltonian in Eq. (4) to include
We define the NFIS for an antineutrino as111The two fundamental representations and of the SU(2) group generated by the Pauli matrices are equivalent. These representations are related to each other by the transformation , and transforms in exactly the same way as does under rotation. Defining , one naturally obtains the minus sign in Eq. (29).
so that the terms related to neutrinos and antineutrinos appear symmetrically in . The probability for an antineutrino to be in is determined from
For an antineutrino, , and 0 correspond to , and a maximally mixed state, respectively.
Now Eqs. (14a) and (14b) can be rewritten in terms of the NFIS’s in a more compact way
with the understanding that
and that the sum runs over both neutrinos and antineutrinos. We also define a total effective energy (density) for a system of neutrinos and antineutrinos that interact with a matter background as well as among themselves through forward scattering:
We note that this effective energy should not be confused with the physical energies of neutrinos and antineutrinos. It can be shown from Eq. (31) that is constant if and all the ’s and ’s are also constant. The concept of the total effective energy will prove useful in understanding collective flavor transformation in a dense neutrino gas.
In the early universe the neutrino gas is isotropic, and
For illustrative purposes we will assume this isotropy condition in most of what follows. We will discuss the effects of the anisotropic supernova neutrino distributions in Sec. V.
Iii Synchronized Flavor Transformation
In a dense neutrino gas NFIS’s are coupled to each other through self-interaction and may exhibit collective behaviors. As discovered in the numerical simulations of Ref. Samuel (1993), neutrinos with different energies in a dense gas act as if they are oscillating with the same frequency. This collective behavior was referred to as “synchronized” flavor oscillations in the literature and explained in Ref. Pastor et al. (2002) by drawing analogy to atomic spin-orbit coupling in external magnetic fields. In this section we will first review the characteristics of a simple synchronized NFIS system from the perspective of the conservation of the total energy of the NFIS system. We will then extend the discussion to more general synchronized NFIS systems using the concept of a “co-rotating frame” and demonstrate the criteria for a NFIS system to be in the synchronized mode. We will show that the stability of a synchronized system is secured by the conservation of the total effective energy. In the last part of the section we will look into the problem of synchronized flavor transformation in the presence of ordinary matter, which is relevant for the supernova environment.
iii.1 A Simple Example of Synchronized Flavor Transformation
We start with a simple case of a uniform and isotropic neutrino gas with no matter background (). The gas initially consists of pure neutrinos with a finite energy range corresponding to , and all the ’s stay constant. The e.o.m. of a single NFIS is
is the total NFIS (density) of the gas. (The NFIS density for an individual “spin” is just .) Summing Eq. (36) over all neutrinos, we obtain
Following the discussion at the end of the preceding section, the evolution of the individual () and the total () NFIS obeys conservation of the total effective energy
An interesting limit is
Noting that each has a magnitude of 1/2 and has a magnitude of unity, we see that
in the above limit. Therefore, a gas with a large initial total NFIS evolves in such a way that it roughly maintains the magnitude of its . For such a gas, Eq. (36) reduces to
which means that each precesses around the total NFIS with a fixed common (angular) frequency
Eq. (38) shows that evolves on a timescale . Consequently, over a period satisfying
averages out to be and Eq. (38) effectively becomes
It can be shown from Eqs. (38), (41), and (42) that Therefore, precesses around with a fixed frequency while the individual ’s precess around with a fixed common frequency . This collective behavior of a dense neutrino gas is usually referred to as synchronized flavor oscillations Pastor et al. (2002).
iii.2 General Synchronized Systems
Synchronization can occur not only in dense neutrino gases but also in dense antineutrino gases and gases including both neutrinos and antineutrinos. Noting that the NFIS’s for neutrinos and antineutrinos essentially only differ by the signs in ’s [see Eq. (32)], one can repeat the same arguments in Sec. III.1 for these more generalized cases. Instead of doing so, we want to proceed from a new perspective, which demonstrates some of the benefits of the NFIS notation.
We consider a reference frame rotating with an angular velocity of . In this co-rotating frame, Eqs. (36) and (38) take the form
where () and () are () and its time derivative in terms of their -, -, and -components in the co-rotating frame, and
It is clear that one can set of a NFIS to any value by choosing an appropriate co-rotating frame, and a NFIS for an antineutrino in the lab frame becomes a neutrino in some co-rotating frame. For example, the NFIS in the lab frame with and corresponds to a with energy . In a co-rotating frame with the NFIS has and , which corresponds to a with energy . Therefore, the NFIS notation really treats neutrinos and antineutrinos on an equal footing.
Because and are the same vector in two different frames, the synchronization of the NFIS’s in one frame means the synchronization in any frame. Consequently, synchronization can occur in dense antineutrino gases and gases of both neutrinos and antineutrinos just as it can occur in pure neutrino gases as long as Eq. (40) is satisfied in some co-rotating frame.
As we have seen, is not uniquely determined and can have different values in different co-rotating frames. However, we note that the relative spread of the individual values of the ’s of the NFIS’s is an intrinsic property of a NFIS system and is co-rotating frame invariant. For a co-rotating frame with
one has
measures the spread of the ’s in the NFIS system. Synchronization can be obtained if
When applying this condition to astrophysical environments such as the early universe and supernovae, we must consider the meaning of as neutrinos in these environments formally have an infinite energy range. One interesting scenario is where the distribution of NFIS density as a function of has a single dominant peak. An example is the neutronization burst in a core-collapse supernova event where the neutrinos emitted are dominantly with a Fermi-Dirac-like energy distribution . For this case a natural estimate of is the half-width of the distribution function , where is obtained from using the relation .
Another interesting scenario is where the distribution of NFIS density as a function of has two dominant peaks. An example of this scenario is the Kelvin-Helmholtz cooling phase of a proto-neutron star in a core-collapse supernova event where the neutrinos emitted are mostly (in number) and . For this scenario one can take
where and are the peak energies of the and energy spectra, respectively.
For more complicated scenarios, the criterion to obtain synchronized flavor oscillations can be compared to the criterion for an adiabatic MSW flavor conversion. If a NFIS system has been tested to be in a synchronized mode using the analyses in Sec. III.1 in some co-rotating frame, each individual NFIS should precess around the total NFIS with a fixed angle. This is the same “tracking” behavior as in the adiabatic MSW flavor transformation process discussed in Sec. II except that now takes the place of in Eq. (26). Because slowly rotates around with angular frequency , the adiabatic criterion yields
where is the angle between the directions of and . Eq. (54) provides a necessary condition for synchronization. Practically one may use
as the criterion for synchronization, where is evaluated using Eq. (46) with all the relevant neutrino and antineutrino energy distributions.
We now make some comments on the stability of the synchronized mode. Because neutrinos with different energies have different vacuum oscillation frequencies, one may think that the NFIS’s will develop relative phases and that the resulting destructive interference will break the synchronization, i.e., reducing to approximately 0. Indeed, using Eq. (38) one can see that
is generally not zero, and therefore, varies with time. However, Eq. (36), from which Eq. (38) is derived, can be used to show that the total effective energy is conserved and the total NFIS roughly maintains constant magnitude if the ’s do not vary with time. [see Eq. (41)]. In this case, destructive interference stemming from the relative phases of different NFIS’s cannot completely destroy synchronized flavor oscillations. On the other hand, if initially, no significant synchronization of NFIS’s can occur spontaneously. This result is in accord with the lengthy study in Ref. Pantaleone (1998).
iii.3 Synchronized Flavor Transformation with a Matter Background
We now discuss the effects of a matter background on synchronized flavor transformation in dense gases of neutrinos and/or antineutrinos. The relevant evolution equations are
First we assume a fixed matter background with net electron number density . For high , corresponding to , Eqs. (57a) and (57b) reduce to
The above equations correspond to perfectly synchronized flavor oscillations: in a frame rotating with an angular velocity of , the total NFIS stays fixed and the individual NFIS’s precess around it with a common frequency . However, for neutrinos and antineutrinos initially in pure flavor eigenstates, and start out aligned or anti-aligned with . Therefore, the above perfect synchronized flavor oscillations reduce to a trivial case where all ’s remain in the initial state (i.e., all neutrinos stay in their initial flavor states). This trivial case is of no interest to us and will not be discussed further.
For and , the discussion is similar to the case with no matter background. All ’s precess around with a frequency and Eq. (57b) becomes
Therefore, the total NFIS of the gas precesses around the effective field and behaves just as does a single NFIS with and in the same matter background [see Eq. (22)]. For the cases with and or with and , this representative NFIS corresponds to a neutrino with energy
For the other cases, this representative NFIS corresponds to an antineutrino with energy . For an initially pure neutrino gas, is simply the neutrino energy distribution-averaged value of :
where is the energy distribution of . For more general cases, is evaluated using Eqs. (46) and (60) with all the relevant neutrino and antineutrino energy distributions.
The above discussion can be extended to the case of a slowly varying matter background in a straightforward manner. We note that this is again an adiabatic process as discussed in Sec. II except that takes the place of this time. The angle between and is therefore constant. A gas of initially dominantly with acts just like a single neutrino with energy propagating in this matter background. For a normal mass hierarchy (), there may be an MSW resonance that can enhance flavor transformation. In contrast, no MSW resonance exists and flavor transformation is suppressed by the matter effect for an inverted mass hierarchy ().
Obviously, for a neutrino and/or antineutrino gas with , there is no synchronized flavor transformation.
Iv Bi-Polar Flavor Transformation
The astrophysical environments where neutrino flavor transformation is of interest do not always provide conditions which are favorable for synchronization. For a neutrino gas to be in the synchronized mode, the neutrinos have to be prepared in such a way that the corresponding NFIS’s are strongly aligned in one direction. There are important regimes where this does not occur.
For example, consider the mixing channels and in the late-time, shocked region above the proto-neutron star. By definition, for a or and for a or . Therefore , , , and form two NFIS blocks pointing in opposite directions when they leave the neutrino sphere. The subsequent behavior of these neutrinos is interesting. We show below that under the right conditions large-scale collective “swapping” of flavors and can occur in a mode in which the NFIS blocks remain more or less oppositely-directed. This is an example of the bi-polar mode.
In Ref. Kostelecky and Samuel (1993), numerical simulations of a homogeneous, dense neutrino-antineutrino gas in the absence of a matter background showed that the flavor “swapping” in the bi-polar mode occurred at a higher frequency than would vacuum oscillations. Ref. Kostelecky and Samuel (1995) gave an analytical solution to a simple bi-polar system, a gas initially consisting of equal numbers of mono-energetic and , for a normal mass hierarchy. Ref. Samuel (1996) generalized the solution to a gas of unequal numbers of and with different energies, again for a normal mass hierarchy scenario, and found that the system exhibits bimodal features (dual frequencies).
In this section we again adopt a physical, analytical approach and use a simple example to illustrate neutrino flavor transformation in bi-polar systems from the energy conservation perspective. For the first time, we explain why large flavor mixing can develop in some bi-polar systems even with a small mixing angle. We will then extend the discussion to more general bi-polar systems using the co-rotating frame, and discuss how bimodal features can appear in such systems. We will propose criteria under which a NFIS system can be in the bi-polar mode, and show that a bi-polar system is at least semi-stable. We will conclude this section with some discussion on the effects of the matter background on bi-polar flavor transformation.
iv.1 A Simple Example of Bi-Polar Flavor Transformation
We start with a simple bi-polar system initially consisting of mono-energetic and with an equal number density , which form two NFIS blocks and . This system is uniform and isotropic and has no matter background. The evolution of and is governed by [see Eq. (31)]
where . With the definition of
we find
The initial conditions are
where and are the unit vectors in the - and -directions, respectively, in the vacuum mass basis (). Using these conditions and Eq. (64), we can show that
In other words, can only move parallel to while is confined to move in the plane defined by and .
The evolution of and obeys conservation of the total effective energy
which gives
with being the angle between and . Further, it can be shown from Eq. (63) that
Combining Eqs. (68) and (69), we obtain |
f24e24c959ec8d9e | JAMP Vol.6 No.12 , December 2018
Towards Understanding the Origins of Quantum Indeterminism and Non-Local Quantum Jumps
Abstract: By Invoking symmetry principle, we present a self-consistent interpretation of the existing quantum theory which explains why our world is fundamentally indeterministic and that why non-local quantum jumps occur. Symmetry principle dictates that the concept of probability is more fundamental than the notion of the wave function in that the former can be derived directly from symmetries rather than have to be assumed as an additional axiom. It is argued that the notion of quantum probability and that of the wavefunction are intimately connected.
1. Introduction
Quantum theory originated in the early 20th century with the inability of classical physics to account for the observed spectral distribution of the black body radiation and the new experimental data related to the electronic structure of atoms and molecules. Through the works of Max Planck, Albert Einstein and Arthur Compton light began to be conceived as consisting of discrete packets (quanta) of energy, now known as photons. In 1913 the mathematical model of Neils Bohr for the hydrogen atom came, subsequently improved by the work of Arnold Sommerfeld. The next revolutionary idea that came in 1924 was de Broglie’s hypothesis on the possibility of an electron and the like, having wave-like properties, and its subsequent verification by the seminal electron diffraction experiments by Davison and Germer in 1927. This provided impetus for the development of the elegant quantum mechanics. Influenced by Werner Heisenberg and many other giants, including Paul Dirac, Max Born, Erwin Schrödinger, Wolfgang Pauli and John von Neumann, to name only a few, the basic mathematical structure that we have today of quantum mechanics was formed. Max Jammer’s classic book [1] dwells at length on the chronological account of the development of quantum formalism. The theory so developed consists of a complete and logically consistent framework of mathematical deductions that could be applied to any quantum physical system and is beyond doubt one of the major advances in the history of science. Since its beginning, quantum theory has never been found to be contradicted by any microscopic phenomenon. The spectacular advances in physics, chemistry, biology, electronics―and essentially every other science―could not have occurred without the wonderful tools made possible by the deep knowledge of the micro-structure that quantum mechanics has offered us. With its elegant mathematical structure, firstly, quantum mechanics solved with immense success mysteries ranging from macroscopic superconductivity to the microscopic theory of elementary particles. Secondly, this compelled physicists to revise drastically the pre-existing ideas regarding reality and to reshape the concepts of cause and effects when dealing with matter at the microscopic level.
Despite such profound success, however, quantum mechanics has been notoriously confusing when it comes to its interpretation. Since the early development of quantum mechanics, the concept of measurement or the collapse of the quantum wave function has been at the root of the controversy that found concrete expression in the historical Bohr-Einstein debates [2] . There emerged many interpretations of quantum mechanics differing over which physical processes are to be considered measurements; can the measurement process be understood in deterministic terms; what actually causes collapse of the wave function; which elements of quantum theory can be called as real; what is the role of a conscious observer in the measurement process, and other matters. Among the prominent interpretations are, for instance, the hidden variable theories (an example of which is the de Broglie-Bohm pilot wave theory [3] [4] , the relative state formulation or the many worlds interpretation of quantum mechanics due to Haugh Everett [5] [6] , the consistent histories interpretation [7] , the ensemble or statistical interpretation [8] , etc. We do not, however, intend to evaluate these various views regarding the foundations of quantum mechanics; in fact, a huge amount of work have been devoted to these topics but the puzzles that quantum theory generated persist to this day [9] [10] . One naturally wonders how it is possible that such an extraordinarily successful theory may lead to such dubious issues and so many contending interpretations.
Lucien Hardy [11] showed elaborately that quantum theory can follow from five very reasonable axioms which, being related to the classical probability theory, might well have been posited prior to any empirical data which became available at the beginning of the 20th century leading ultimately to the foundations of quantum mechanics. One would however wonder to understand why a physical theory should be probabilistic at the fundamental level.
Symmetries have long been realized to play a fundamental role in quantum mechanics. The very basic algebraic structure and conservation laws in quantum mechanics follow from symmetry transformations on the wave function such as displacements and rotations through angles in space. But we argue that symmetries tell us more than what we think of them in the conventional account of the theory. In this paper, we wish to invoke symmetry rules to explain why there is indeterminism in our world in the first place. This in turn automatically explains why non-local quantum jumps are deeply rooted in the theory. Symmetry principle tells us that a quantum jump or collapse of the wave function is by its very nature instantaneous and that we cannot explain it in causal terms. Here we do not intend in the least to depart from the standard formulation of quantum mechanics but will try to make it clear that the role of symmetries in quantum mechanics is far more revealing than what is conventionally recognized. We will argue that in contrast to the traditional view the concept of probability is the prime ingredient of quantum mechanics that need not be imposed from outside as to give meaning to the wave function―it is rather the wave function which can be introduced into the picture as a mathematical tool to describe the intrinsic indeterminism resulting from symmetry.
We begin our discussion with a brief discussion of the measurement problem in Section 2. In Section 3, we present our view about the role of symmetries in understanding the emergence of indeterminism and non-locality. In Section 4, we outline the main conclusions which can be derived from this paper.
2. The Measurement Problem
Starting with John von Neumann [12] the orthodox school (also commonly known as the Copenhagen School of thought for historical reasons), led by Neil Bohr, holds that a wavefunction (the quantum state), represented by a state vector ψ in a Hilbert space, evolves with time according to the deterministic Schrödinger equation when the quantum stateis not observed, and is in general given by a coherent superposition of various experimental results, i.e., ψ = i c i ϕ i . Here ϕ j ’s represent the possible experimental results (eigenstates of an operator representing a dynamical variable that is being measured), and cj’s (in general complex) are the probability amplitudes for the various outcomes. The quantity | c j | 2 is postulated as representing the probability of finding the quantum system in the state ϕ j . Moreover, since an immediately repeated measurement yields the same result, we are to assume that the act of measurement collapses the wavefunction ψ, as we say; in an instant the quantum system jumps into a definite state ϕ j and all other possibilities simply cease to exist. For example, a particle’s existence is spread out over the entire space and appears only in a localized region of space when one measures its position―the probability of finding the particle at all other points drops to zero instantly. This instantaneous collapse (reduction) of the wavefunction, also referred to as non-locality1, proved a source of conceptual difficulties which has come to be known as the measurement problem. The difficulties come about in the interpretation of the mechanism by which a definite state is instantly singled out from amongst all possible outcomes. In this regard the orthodox school holds that the act of measurement is strictly outside the explanatory reach of quantum mechanics and that it requires a separate axiom of measurement in addition to the formalism of quantum mechanics.
3. Emergence of Indeterminism from Symmetry
In a celebrated correspondence with Clarke, a disciple of Newton, Leibniz denied the independent existence of space and time [13] [14] . Leibniz described space as a relational notion that existed only as a relation between objects that had no independent existence apart from those objects; motions existed only as relations between those objects. Leibniz critiqued Newton’s idea of the absolute space in which all points were exactly identical and relative to which all motions took place, by claiming that such a situation would have presented God with an impossible decision, i.e., where precisely to put the contents of the universe; why here rather than there? He argued that even God must have enough reason for all His acts; the impossibility of finding any such reason for any placement demonstrated that the notion of an absolute space could not be correct.
Yet one could offer a radically different description of the same kind of a problem which probably would have been far direr than what Leibniz might have thought, and which, in this author’s view, also contained the seeds for a non-deterministic mechanics. To proceed, we assume that a continuous (smoothed-out) Newtonian space pre-exists out there―this is what both classical physics and the contemporary quantum theory rest on. In order to make the argument more explicit, we restrict our attention to a simplistic example of a one-dimensional circular space (that could be of any size with no preferable point―symmetry). Now, one is asked to put a point particle on the circle somehow. The word “somehow” is used deliberately for it indicates the real trouble―as pointed out by Leibniz; since all the points in space are indistinguishable, we can by no causal reasoning explain the mechanism through which this placement would take place. Here, suddenly, all the rules of local determinism which we are familiar with no longer hold. For a point particle to reside in such a naive place, where there is no preferable location, we will need a drastic revision of our fundamental concepts―a new kind of physical laws must be formulated in place of the traditional laws of classical mechanics. The only plausible way to accommodate the current theoretical problem is to postulate the following utterly unusual rules which have no classical counterparts:
First, “since all points on the circle are equivalent, each point has to share with equal ‘weight’ the existence of the ‘whole’ particle”. A classical determinist would hardly be prepared to allow for this hypothesis because it abandons any notion that a particle could have a well-defined position. This fuzzy non-classical behavior that the particle exhibits, i.e., having existence at all points on the circle at the same time, is what one might call as potential existence―a term which in its essence is no different from Heisenberg’s “potentia” [15] . We may express it as probabilities of experimental outcomes and must therefore be distinguished from the mere existence in space as for classical objects. One should, however, recognize that there is a strong difference between the concept of (quantum) probability introduced here and the one what it is in classical physics.
The second, and probably more inexplicable, postulate is that, “when ‘projected’ into this space, the particle’s potential existence has to spread over the entire space instantly”. Note that this postulate, expressing non-locality of the theory, is certainly tied with the first one because otherwise the first postulate would be violated for the length of time, whatever, the particle takes to fill the available space with its potential existence. Remarkably, this instantaneous spread is the essence of discontinuous quantum jumps (i.e., the collapse phenomenon in which the particle picks up one observable value out of many) as will become even clearer when we formulate these postulates mathematically. Notable in all this approach is that the concept of probability and that of the discontinuous quantum jumps are not the concepts to be endowed arbitrarily into the theory (as is done in the conventional approach) but rather are necessitated by the symmetry rules and are mutually consistent. It is thus the symmetry rule that dictates us to adapt ourselves to the counter-intuitive notions as potential existence and non-local abrupt jumps, and we cannot make this weirdness go away by explaining how it works. It is therefore not wise to search for any causal explanation of the processes that are intrinsically probabilistic and non-local.
Let us now attempt to find out an appropriate mathematical formulation that would fit the description of this manifestly probabilistic realm. First of all note that the probability density on the circle should be constant and positive definite. Merely a positive definite constant | A | 2 (where may be a complex constant in general) would not do because the probability density should as well be expressed by a function of the angular variable φ and of any possible “uniform circular flow” of the probability density. Therefore, we choose a function | A f ( m φ ) | 2 to represent the constant and positive definite probability density. Now, in order for | A | 2 to represent the constant probability density the right choice (apart from an overall phase factor) for the function f ( m φ ) would be A exp [ i m φ ] . A must be chosen such that the total probability | A exp [ i m φ ] | 2 d φ is unity. So, it naturally turns out that a point particle on a circle can be represented by a complex stationary circular wave ψ ( φ ) = A exp [ i m φ ] . Demanding this function to be single-valued, m should be restricted to the values 0, ±1, ±2, ... One of these values will be picked up by the particle while it interacts instantly with the circular space. Furthermore, note that for a large radius this circular wave function should reduce to a plane wave2. Thus, the reasoning presented here naturally leads to the de Broglie’s hypothesis as well as Bohr’s discrete stationary states.
It is worth noting that the symmetry rules lead to the conclusion that the wave function and its probabilistic interpretation are intimately connected and, therefore, Born’s probability hypothesis [16] no longer seems to be an ad hoc one. Thus, once it is conceived that there is a wave function associated with a particle, the usual mathematical structure of the quantum theory can be recovered.
4. Conclusion
We noted that symmetries are at the root of the indeterminism and non-local quantum jumps at the microscopic level. Quantum probability seems to be the primary ingredient of theory. Furthermore, the notions of probability and the wave function are interwoven together within the theory and do not require any axiomatic hypothesis from outside the theory.
The authors are indebted to the University of Tabuk for its continuous support.
1That is, physical distances, and so the limitation of the speed of light, don’t seem to exist.
2It is worth noting that identifying with the angular momentum it is straight forward to show that for a large circle the circular wave reduces to a plane wave with a single value of linear momentum.
Cite this paper: Sadiq, M. and Alharbi, F. (2018) Towards Understanding the Origins of Quantum Indeterminism and Non-Local Quantum Jumps. Journal of Applied Mathematics and Physics, 6, 2468-2474. doi: 10.4236/jamp.2018.612208.
[1] Jammer, M. (1966) The Conceptual Development of Quantum Mechanics. McGraw Hill, New York.
[2] Whitaker, A. (1996) Einstein, Bohr and the Quantum Dilemma. University of Cambridge, Cambridge.
[3] Bohm, D.J. and Hiley, B.J. (1982) The de Broglie Pilot Wave Theory and the Further Development of New Insights Arising out of It. Foundations of Physics, 12, 1001-1006.
[4] Bell, J.S. (1992) Six Possible Worlds of Quantum Mechanics. Foundations of Physics, 22, 1201-1215.
[6] Barrett, J.A. (1999) The Quantum Mechanics of Minds and Worlds. Oxford University, Oxford.
[7] Griffiths, R.B. (1993) Consistent Interpretation of Quantum Mechanics Using Quantum Trajectories. Physical Review Letters, 70, Article ID: 2201.
[8] Ballentine, L.E. (1970) The Statistical Interpretation of Quantum Mechanics. Reviews of Modern Physics, 42, 358.
[9] Bell, J.S. (1987) Speakable and Unspeakable in Quantum Mechanics. Cambridge University, Cambridge.
[10] Cushing, J.T. (1994) Quantum Mechanics, Historical Contingency and the Copenhagen Hegemony. The University of Chicago, Chicago.
[11] Hardy, L. (2001) Quantum Theory from Five Reasonable Axioms. arXiv:quant-ph/0101012.
[12] Neumann Von, J. (1932) Mathematische Grundlagen der Quantenmechanik [Mathematical Foundations of Quantum Mechanics]. Springer-Verlog, Princeton University Press, Berlin.
[13] Alexander, H.G. (1956) The Leibniz-Clarke Correspondence. University of Manchester, Manchester.
[14] Barbour, J. (1982) Relational Concepts of Space and Time. The British Journal for the Philosophy of Science, 33, 251-274.
[15] Heisenberg, W. (1958) Physics and Philosophy, the Revolution in Modern Science Harper, New York.
[16] Born, M. (1926) Quantenmechanik der stoBvorgange. Zeitschrift für Physik, 38, 803-827. |
fda0093f70dc5894 | For an individual free particle that starts localized, the wave function packet spreads over time, so the particle becomes less localized. Suppose now that we have a gas of those particles inside a box, and we allow them to collide (using some potential): will the wave function of each particle still spread indefinitely, or do collisions act as a source of decoherence and the wave function relocalizes again? I have heard both arguments by different colleagues, even in textbooks. Has anybody made a computer simulation that shows what would be the best picture?
• $\begingroup$ I asked a similar question here but got no conclusive answers, so I think it must be pretty subtle. I hope you get a good canonical answer here. $\endgroup$ – knzhou Jul 3 '16 at 18:33
• $\begingroup$ It would be interesting if you cited some of the different arguments in textbooks that you've seen. $\endgroup$ – Rococo Jul 3 '16 at 20:18
• $\begingroup$ One good answer to this question would acknowledge the identical nature of particles, use second quantization, and compute the two point density correlation for a (ideal) gas. $\endgroup$ – DanielSank Jul 3 '16 at 21:58
• $\begingroup$ This is a highly non-trivial problem. I would suggest you started by solving the scattering problem for identical particles. I found a discussion of the problem in Landau, Quantum Mechanics, par. 137. $\endgroup$ – valerio Jul 4 '16 at 8:51
• 1
$\begingroup$ Related: en.wikipedia.org/wiki/Anderson_localization which is a system that does have localization. The question is if it applies in the context, at which point I have to say the question is too broad. It certainly still has my +1 though and could be confined e.g. by using @DanielSank's comment. $\endgroup$ – Wolpertinger Jul 4 '16 at 18:36
Preliminaries: How do we define 'localized?'
For a single particle, or for multiple non-entangled particles, it is easy to tell from the expressions for the wavefunctions whether they are localized or delocalized. For example, you might say that if the wavefunction is falling off exponentially or faster for large $x$, that is with a form like $\psi(x)\sim e^{-x/\xi}$ with some characteristic length scale $\xi$, then it is localized, while something like a plane wave (which can be considered to be in the $\xi \rightarrow \infty$ limit) is delocalized.
For interacting particles, the many-body quantum state will generically evolve to something that is entangled between the particle. Then there is no longer a wavefunction for an individual particle, and the question of localization is no longer so straightforward. For example, is the two-particle state $\psi(x_1,x_2)\sim e^{-(x_1-x_2)^2}$ localized or delocalized?
A standard way to generalize this idea of localized/extended to many-body systems is by using the concept of entanglement entropy (1), and asking if a particular region is entangled with another distant region of the system. For a one-dimensional system, the entanglement entropy is:
$S(\rho_A)=-\text{Tr}[\rho_A \log \rho_A]$, with $\rho_A$ the reduced density matrix for that system:
$$\rho_A(x_1,x_2,\ldots x_N,x'_1,x'_2,\ldots x'_N)=\int_{|x_i|>x_0} \int_{|x'_i|>x_0} ~\mathrm dx_1 \ldots \mathrm dx_N~\mathrm dx'_1 \ldots \mathrm dx'_N ~\psi(x_1,\ldots, x_N) \psi^*(x'_1,\ldots, x'_N)$$
Here we are looking at a region from $-x_0$ to $x_0$. If $S$ is exponentially small, then the system is localized, and if it is not it is extended. Notice that we've moved from talking about particles to talking about regions. This is more natural when thinking about localization, but for a uniform density of particles at a particular moment in time localization of one implies the other.
The sense of "localized" that we now have is that for a localized system, a measurement at one point does not perturb the quantum state at a faraway point. Using this standard, if you carry out the above calculations on a state like the above two-particle state, or a single particle plane wave state, you will find that they have non-zero entanglement entropy and are extended. However, a state like $\psi(x_1,x_2)\sim e^{-(x_1/\xi)^2}e^{-((x_2-2x_0)/\xi)^2}$ would be localized, as long as $x_0 \gg \xi$.
Eigenstate thermalization
Okay, with these ideas in place I can now state the answer simply: for a quantum system of particles in a box that interact with a hard-shell repulsion, in a highly excited state and dilute limit, and at equilibrium, the entanglement entropy is proportional to the volume of the system.
What this means, roughly speaking, is that every point in the box is equally entangled with every other point. In this sense, the system is extended. Measuring the quantum state at one point will also affect the quantum state at every other point.
The proof of this is basically due to Srednecki, in a foundational paper of quantum thermalization which I encourage you to take a look at (2). For the above system, Srednecki shows that the eigenstates of the system give particle behavior that agrees with Maxwell-Boltzmann statistical mechanics, and furthermore that systems that start far from equilibrium (such as the case you mention where everything starts out localized) will also evolve to an equilibrium state that obeys these predictions. Furthermore, subsequent work has emphasized that any system that has this self-equilibrating property, known as eigenstate thermalization, will also necessarily show volume scaling of entanglement entropy (see, for example, (3)).
All of what I've said so far has been about the pure quantum state, but people often talk about this kind of system in terms of decoherence. What's the connection?
Well, decoherence happens when the system of interest is entangled with many other inaccessible systems- and that's clearly what happens here (4, 5). Since any part of the system is entangled with every other one, for a system of even moderate size it would be practically impossible to observe the coherence between different parts. This means that the system will be functionally indistinguishable from a system with no coherence, or just a classical statistical ensemble. This is the miracle of entanglement- if you have enough of it, things get simpler instead of more complicated. That's why measurements, which invariably produce some complicated entangled state between the system and apparatus, can nonetheless result in gaining knowledge.
There are two valid ways one can describe the state of the box of colliding particles after a long time:
1. It is a complex many-body entangled state in which each part is equally entangled with every other part, but in such as way as to reproduce standard statistical results (such as the Maxwell-Boltzmann distribution) for a single-particle measurement.
2. Because of the high amount of entanglement, for all practical purposes the particles may also be treated as decohered classical particles, in which case they of course have a well-defined position and momentum.
Neither of these claims is incorrect, and each might be useful in the right context.
It really depends on the boundary conditions. For boundary conditions like a 3D box with reflecting walls, the initial quantum state $\Psi$ will stay a quantum state with the unique wave function depending on variables of each particle: $$\Psi({\bf{r}}_1,...,{\bf{r}}_n, t).$$ If the boundary conditions are such that allow exchange with the environment, then the density matrix approach may become more appropriate.
In both cases the positions of particles are statistically predicted, but in the latter case there may not be interference phenomenon (or it will be less pronounced).
Consider your case for a double slit experiment without and with particle position measurements, measurements before the screen.
To solve your problem exactly, you would have to solve the Schrödinger equation
$$i \frac{\partial}{\partial t} \Psi (\vec r_1 \dots \vec r_N,t)= H \ \Psi(\vec r_1 \dots \vec r_N,t)$$
where $\Psi (\vec r_1 \dots \vec r_N,t)$ is the wave function of the $N$ particles and
$$H=\sum_i^N \frac{p_i^2}{2 m} + \sum_{i<j}^N u_{ij}+V_{\text{ext}}$$
where $u_{ij}$ is some pair potential and $V_{\text{ext}}$ is the box potential. Of course, you also need an initial condition
$$\Psi (\vec r_1 \dots \vec r_N,0) = \tilde \Psi (\vec r_1 \dots \vec r_N)$$
The first thing that you must notice is that it is inappropriate to talk about "the wave function of each particle", because you have to consider the total wave function of the $N$ particles ($\Psi$). If the particles are indistinguishable, this function must posses some symmetries, depending on what kind of particles you are considering (bosons or fermions).
It is really difficult to solve such a problem analytically or numerically. A good starting point to get a qualitative idea would be to solve the corresponding equations for two particles and see what happens.
This has been done numerically by the authors of this article using a gaussian and square potentials, with distinguishable and indistinguishable particles. in the latter case, symmetrized (bosonic) and antisymmetrized (fermionic) wave functions were considered:
$$\Psi'(x_1,x_2) = \frac 1 {\sqrt{2}} [\Psi(x_1,x_2)\pm \Psi(x_2,x_1)]$$
The wave function at $t=0$ is assumed to be the product of two gaussian wave packets:
$$\Psi(x_1,x_2, t=0) = g(x_1,x_1^0,k_1,\sigma) \ g(x_2,x_2^0,k_2,\sigma)$$
$$g(x,x^0,k) = e^{i k x} e^{-\frac{(x-x^0)^2}{4 \sigma^2}}$$
and $k_2 = - k_1$.
In the following plot you can for example see the result of the collision of two distinguishable particles with equal mass $m$ interacting via a square potential (the curves are the probability densities obtained by the wave functions):
enter image description here
The numbers in the top left corners indicate the time (in units of $100 \times $ time step) and the edges of the frames correspond to the walls of the box.
You can see that there is indeed a "spreading" of the wave packets (to be more precise, thier modulus squared, that is, the probability densities) after the collision (cfr. 1 and 120).
Quoting from the article:
In Fig. 5 we show nine frames from the movie of a repulsive m–m collision in which the mean kinetic energy equals one quarter of the barrier height. The initial packets are seen to slow down as they approach each other, with their mutual repulsion narrowing and raising the packets up until the time (50) when they begin to bounce back. The wavepackets at still later times are seen to retain their shape, with a progressive broadening until they collide with the walls and break up.
In the context of solid-state physics, a closely related question has been an area of active research in the past few years. Most interacting systems do indeed thermalize (and thus delocalize) over long time scales. However, certain systems whose disorder is much stronger than their interactions experience "Many-Body Localization," in which the individual particles remain "stuck" indefinitely. This has many macroscopic consequences, like lack of electrical conduction (because the electrons aren't free to move) and entanglement entropy that only scales as an area law rather than a volume law for every eigenstate (not just the ground state). There are far too many papers on this topic to list, but the original paper that started it all is Basko, Aleiner, and Altschuler (2006).
A dense gas with relatively strong interactions (by gas standards) might roughly be thought of as a very strongly disordered, weakly interacting solid, so I suspect that just as in the solid-state case, your thought experiment might lead to either a delocalized or a localized state, depending on the details of the density of the gas and the strength of its interactions.
• $\begingroup$ I'm not convinced that a dense gas of a single species with strong interactions can be thought of as equivalent to static disorder. In one case the system has translational invariance, and in the other it does not. However, an interesting intermediate case is that in which there are two species of particles, one much heavier than the other, in which one might imagine that the heavy particles are "almost" static and look like disorder to the lighter ones. This is discussed a bit, for example, here: arxiv.org/abs/1606.05650 . $\endgroup$ – Rococo Jul 4 '16 at 18:19
• $\begingroup$ (but +1 for mentioning MBL as the other possibility for an interacting system) $\endgroup$ – Rococo Jul 4 '16 at 18:30
Quantum effects appear if the concentration of particles satisfies, $$\frac{N}{V} \ge n_q$$ where $N$ is the number of particles, $V$ is the volume, and $n_q$ is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are barely overlapping. As the quantum concentration depends on temperature, high temperatures will put most systems in the classical limit unless they have a very high density e.g. a White dwarf or the very early Universe.
The quantum nature of the particle manifests itself in that bosons obey Bose–Einstein statistics and fermions obey the Fermi–Dirac statistics. Both Fermi–Dirac and Bose–Einstein statistics are well approximated by the classical Maxwell–Boltzmann statistics at high temperature and low concentration where the quantum effects are negligible.
I suppose you mean that the gas is contained in a magic box. Otherwise the walls become part of the system, exchanging momentum/energy with the 'particles'. I have no answer for you; I don't know. What I do know is that none of the particle-particle collisions can be characterized other than by using a probability distribution. Common sense demands that assuming you know a particles approximate position and momentum at time t=0, and assuming the distribution of other particles in the box is random and they are all at thermal equilibrium with the walls, the position or momentum will be more and more evenly distributed over the volume (phase space) at times t>0. So, you tell me, are you using the particle as the origin of the coordinates of said wave equation, or is the frame of reference fixed by the box? As far as I can see, your question isn't useful and lacks any predictive value. It seems to me your question is about the state of the wave equation in between observations. Which (maybe because I'm a fan of the Copenhagen Interpretation) seems profoundly misguided. You probably can get the particle localized, once (t=0) but then what? How do you observe it a 2nd time? Clearly that 2nd observation will not be at an equivalently localized place. (barring some trap).
The behaviour of the molecules in your gedanken experiment can be approached by using decoherence. But I do not believe you can get a definitive answer until somebody makes a full scale simulation (or until some expert's answer can make a formal proof of what really happens, but I am not skilled to do that). The decoherence effects can be argued heuristically, but taking that approach the answer I found is ambiguous.
On one hand, a single molecule is not isolated, it interacts with a macroscopic environment (gas, electromagnetic radiation, walls). In most cases this environment will act to reduce the entangled state of as single molecule and make it to apparently collapse into a preferred basis. The prefered basis is usually the position basis, but this is not always the case and depends on the details of the set up.
If the preferred basis is the position basis, then the wavefunction of individual molecules will not spread for too long, its interaction with the rest of the environment in which it is situated will make the probabilities to "collapse" to a smaller packet (as mentioned by P. Shor in the comment, it cannot be a pure eigenstate of the position operator). But if the preferred basis is different, for instance the momentum basis, then the end result will be a largely delocalized wave packet, This second option seems to be more consistent with the fact that once reaching thermodynamic equilibrium, the gas molecules satisfy the Maxwell Boltzmann statistics.
• 2
$\begingroup$ The wavefunction can never collapse entirely to the position basis, as this would mean the momentum would be unbounded. $\endgroup$ – Peter Shor Jul 3 '16 at 20:25
• $\begingroup$ If you are suggesting that molecular or container interactions would result in wave function collapse, I don't think that's true. $\endgroup$ – anon01 Jul 4 '16 at 18:48
• $\begingroup$ @ConfusinglyCuriousTheThird no, that is not what I mean, I will re write my answer to try to make it more clear $\endgroup$ – Wolphram jonny Jul 4 '16 at 18:51
Your Answer
|
aabfe172b0ecec00 | Search This Blog
Friday, August 17, 2012
Aether Action
It is really unfortunate over my forty year sojourn with science that mainstream science has not yet united charge and gravity forces. If you do not know what unification means, do not worry because there are explanations galore for the proposition of charge and gravity unification. Moreover, the limitations of mainstream science are obscured by the tensor algebra of relativity, the particle zoo of the Standard Model, and the mysteries of black holes, dark matter, and dark energy. This complexity renders mainstream science's explanations unintelligible to most people.
My life with science and technology has involved discovery of meaning and a deeper understanding of being. I enjoy very challenging problems in science and technology and have tended to work on problems that others cannot easily solve.
Thus it is quite a pleasure to discover the aethertime universe from which all physical laws and constants derive from a simple set of rational beliefs in discrete matter and action along with the Schrödinger equation. By augmenting continuous space and time with discrete matter and action, gravity and charge forces become scaled versions of each other and there are many other puzzles that discrete matter and action address. In fact, aethertime's particle-like Cartesian and wave-like relational representations for reality reveal the mystery of consciousness along with the vicissitudes and evolution of feeling and emotion.
To explain the inexplicable, discrete matter and time delay provide a rational universe based on a set of three mathematical axioms, axioms that show the mystery of consciousness as well as the purpose and meaning of existence. Aethertime shows that there is a kind of spirituality within a rational universe with the gifts of matter, time, and action as a basis for imagining desirable futures.
The aethertime universe has three primal beliefs as origin, destiny, and purpose, a trimal that discovers meaning and purpose for being. Every life and every universe has a beginning, has a destiny, and has a purpose in discovery and aethertime is a rationale for our universe that also has an origin, has a destiny, and has a purpose in discovery.
Humans and all life share and enjoy but a very thin slice of time and in fact all of human civilization is barely 5,000-10,000 human lifetimes, which is a bare one-hundred-thousandth of the lifetime of our universe. The primordial seed of all that we are is in discrete matter, time delay, and their action and we are therefore the progeny of the action of matter in time, even as we imagine our many possible futures. The universe, all life, and humanity would not be and we would not be without both the actions and the possibilities of matter that is our purpose in discovering how the universe works.
Religions believe in the supernatural, which seems like an otherwise harmless part of most other people’s lives. Religions have variously selected beliefs that are often associated with selective interpretations of ancient stories with mysterious supernatural origins that seem by definition irrational, but so what? People believe in a great many irrational things like extraterrestrial UFO's and conspiracies and yet people still survive and sometimes even thrive with many such irrational beliefs. Some people believe that they are beautiful and attractive in spite of evidence to the contrary in the mirror every morning.
After all is said and done, most of us can and still do agree to live by the golden rule and have compassion for others and limit our selfishness and adhere to the norms of civilization even without any supernatural stories to guide us. However, we also then agree to live by a code of justice enforcing those norms with punishment meted out to those who violate civilization's norms.
Certain elements of religion do show a potentially destructive religio-politico zealotry that often seems to violate civil norms, but really this behavior is not unique for any particular religious ideology or even for religion at all. Religious and political zealotry by their very natures have a potential for persecution, for war, for inquisition, for shunning, for excommunication, and for other religious and political retributions.
Religions believe in an afterlife that is free from all of the misery and selfishness of life, which can lead to self-destructive behavior. Leaving this life in favor of some imagined perfect afterlife can be the source of very destructive behavior, both for individuals as well as others whose lives those individuals touch.
We all have a purpose in discovering how the universe works, which can be as mundane as what is for lunch or as profound as the origin of all things. For me to imagine a desirable future, though, I need something much more rational and much better tied to a rational universe than any of these religious or political beliefs. After all, any of these beliefs, even Buddhism and capitalism, has its zealots.
So I now count myself as a believer of sorts, and I have come to believe in both science and in the metascience of discrete matter, matter exchange, and time delay. Aethertime is a simple set of rational beliefs that anchors existence. Although there will always be some mysteries and gaps in any science, thank goodness that science will always explain the explainable.
But then there will always be the inexplicable that science can never hope to explain, and as a result, we all also need the spiritual or supernatural stories for the inexplicable and the ineffable parts of existence. For the inexplicable, we all need primal beliefs; in an origin, in a destiny, and in a purpose--the trimal. That we need this trimal belief is self evident since there would be no conscious life without unfounded and unconditioned belief. We can choose to ignore the inexplicable, but that simply reduces our purpose to some default or innate belief. In fact, most people accept their primal beliefs from established supernatural agents, which have been providing such guidance from a diverse set of ancient stories for thousands of years.
Discrete matter and time delay are a framework for existence which make help me understand all of the extant beliefs of civilization, religious, political, and philosophical. Through the prism of aethertime, the wisdom of ancient stories comes alive and aethertime provides an understanding of human reason. |
9e9b61efd1d1a243 | Morse potential
The Morse potential, named after physicist Philip M. Morse, is a convenient interatomic interaction model for the potential energy of a diatomic molecule. It is a better approximation for the vibrational structure of the molecule than the QHO (quantum harmonic oscillator) because it explicitly includes the effects of bond breaking, such as the existence of unbound states. It also accounts for the anharmonicity of real bonds and the non-zero transition probability for overtone and combination bands. The Morse potential can also be used to model other interactions such as the interaction between an atom and a surface. Due to its simplicity (only three fitting parameters), it is not used in modern spectroscopy. However, its mathematical form inspired the MLR (Morse/Long-range) potential, which is the most popular potential energy function used for fitting spectroscopic data.
Potential energy function
The Morse potential energy function is of the form
Here is the distance between the atoms, is the equilibrium bond distance, is the well depth (defined relative to the dissociated atoms), and controls the 'width' of the potential (the smaller is, the larger the well). The dissociation energy of the bond can be calculated by subtracting the zero point energy from the depth of the well. The force constant (stiffness) of the bond can be found by Taylor expansion of around to the second derivative of the potential energy function, from which it can be shown that the parameter, , is
where is the force constant at the minimum of the well.
Since the zero of potential energy is arbitrary, the equation for the Morse potential can be rewritten any number of ways by adding or subtracting a constant value. When it is used to model the atom-surface interaction, the energy zero can be redefined so that the Morse potential becomes
which is usually written as
where is now the coordinate perpendicular to the surface. This form approaches zero at infinite and equals at its minimum, i.e. . It clearly shows that the Morse potential is the combination of a short-range repulsion term (the former) and a long-range attractive term (the latter), analogous to the Lennard-Jones potential.
Vibrational states and energies
Like the quantum harmonic oscillator, the energies and eigenstates of the Morse potential can be found using operator methods.[1] One approach involves applying the factorization method to the Hamiltonian.
To write the stationary states on the Morse potential, i.e. solutions and of the following Schrödinger equation:
it is convenient to introduce the new variables:
Then, the Schrödinger equation takes the simple form:
Its eigenvalues and eigenstates can be written as:[2]
with [x] denoting the largest integer smaller than x.
where and is a generalized Laguerre polynomial:
There also exists the following important analytical expression for matrix elements of the coordinate operator (here it is assumed that and ) [3]
The eigenenergies in the initial variables have form:
where is the vibrational quantum number, and has units of frequency, and is mathematically related to the particle mass, , and the Morse constants via
Whereas the energy spacing between vibrational levels in the quantum harmonic oscillator is constant at , the energy between adjacent levels decreases with increasing in the Morse oscillator. Mathematically, the spacing of Morse levels is
This trend matches the anharmonicity found in real molecules. However, this equation fails above some value of where is calculated to be zero or negative. Specifically,
integer part.
This failure is due to the finite number of bound levels in the Morse potential, and some maximum that remains bound. For energies above , all the possible energy levels are allowed and the equation for is no longer valid.
Below , is a good approximation for the true vibrational structure in non-rotating diatomic molecules. In fact, the real molecular spectra are generally fit to the form1
in which the constants and can be directly related to the parameters for the Morse potential.
As is clear from dimensional analysis, for historical reasons the last equation uses spectroscopic notation in which represents a wavenumber obeying , and not an angular frequency given by .
Morse/Long-range potential
An important extension of the Morse potential that made the Morse form very useful for modern spectroscopy is the MLR (Morse/Long-range) potential.[4] The MLR potential is used as a standard for representing spectroscopic and/or virial data of diatomic molecules by a potential energy curve. It has been used on N2,[5] Ca2,[6] KLi,[7] MgH,[8][9][10] several electronic states of Li2,[4][11][12][13][9][12] Cs2,[14][15] Sr2,[16] ArXe,[9][17] LiCa,[18] LiNa,[19] Br2,[20] Mg2,[21] HF,[22][23] HCl,[22][23] HBr,[22][23] HI,[22][23] MgD,[8] Be2,[24] BeH,[25] and NaH.[26] More sophisticated versions are used for polyatomic molecules.
See also
• 1 CRC Handbook of chemistry and physics, Ed David R. Lide, 87th ed, Section 9, SPECTROSCOPIC CONSTANTS OF DIATOMIC MOLECULES pp. 9–82
• Morse, P. M. (1929). "Diatomic molecules according to the wave mechanics. II. Vibrational levels". Phys. Rev. 34. pp. 57–64. Bibcode:1929PhRv...34...57M. doi:10.1103/PhysRev.34.57.
• Girifalco, L. A.; Weizer, G. V. (1959). "Application of the Morse Potential Function to cubic metals". Phys. Rev. 114 (3). p. 687. Bibcode:1959PhRv..114..687G. doi:10.1103/PhysRev.114.687.
• Shore, Bruce W. (1973). "Comparison of matrix methods applied to the radial Schrödinger eigenvalue equation: The Morse potential". J. Chem. Phys. 59 (12). p. 6450. Bibcode:1973JChPh..59.6450S. doi:10.1063/1.1680025.
• Keyes, Robert W. (1975). "Bonding and antibonding potentials in group-IV semiconductors". Phys. Rev. Lett. 34 (21). pp. 1334–1337. Bibcode:1975PhRvL..34.1334K. doi:10.1103/PhysRevLett.34.1334.
• Lincoln, R. C.; Kilowad, K. M.; Ghate, P. B. (1967). "Morse-potential evaluation of second- and third-order elastic constants of some cubic metals". Phys. Rev. 157 (3). pp. 463–466. Bibcode:1967PhRv..157..463L. doi:10.1103/PhysRev.157.463.
• Dong, Shi-Hai; Lemus, R.; Frank, A. (2001). "Ladder operators for the Morse potential". Int. J. Quantum Chem. 86 (5). pp. 433–439. doi:10.1002/qua.10038.
• Zhou, Yaoqi; Karplus, Martin; Ball, Keith D.; Bery, R. Stephen (2002). "The distance fluctuation criterion for melting: Comparison of square-well and Morse Potential models for clusters and homopolymers". J. Chem. Phys. 116 (5). pp. 2323–2329. doi:10.1063/1.1426419.
• I.G. Kaplan, in Handbook of Molecular Physics and Quantum Chemistry, Wiley, 2003, p207.
1. F. Cooper, A. Khare, U. Sukhatme, Supersymmetry in Quantum Mechanics, World Scientific, 2001, Table 4.1
2. Dahl, J.P.; Springborg, M. (1988). "The Morse Oscillator in Position Space, Momentum Space, and Phase Space" (PDF). J. Chem. Phys. 88 (7): 4535. Bibcode:1988JChPh..88.4535D. doi:10.1063/1.453761.
3. E. F. Lima and J. E. M. Hornos, "Matrix Elements for the Morse Potential Under an External Field", J. Phys. B: At. Mol. Opt. Phys. 38, pp. 815-825 (2005)
6. Le Roy, Robert J.; R. D. E. Henderson (2007). "A new potential function form incorporating extended long-range behaviour: application to ground-state Ca2". Molecular Physics. 105 (5–7): 663–677. Bibcode:2007MolPh.105..663L. doi:10.1080/00268970701241656.
12. W. Gunton, M. Semczuk, N. S. Dattani, K. W. Madison, High resolution photoassociation spectroscopy of the 6Li2 A-state,
14. Xie, F.; L. Li; D. Li; V. B. Sovkov; K. V. Minaev; V. S. Ivanov; A. M. Lyyra; S. Magnier (2011). "Joint analysis of the Cs2 a-state and 1 g (33Π1g ) states". Journal of Chemical Physics. 135 (2): 02403. Bibcode:2011JChPh.135b4303X. doi:10.1063/1.3606397. PMID 21766938.
22. Li, Gang; I. E. Gordon; P. G. Hajigeorgiou; J. A. Coxon; L. S. Rothman (July 2013). "Reference spectroscopic data for hydrogen halides, Part II:The line lists". Journal of Quantitative Spectroscopy & Radiative Transfer. 130: 284–295. Bibcode:2013JQSRT.130..284L. doi:10.1016/j.jqsrt.2013.07.019.
24. Meshkov, Vladimir V.; Stolyarov, Andrey V.; Heaven, Michael C.; Haugen, Carl; Leroy, Robert J. (2014). "Direct-potential-fit analyses yield improved empirical potentials for the ground XΣg+1 state of Be2". The Journal of Chemical Physics. 140 (6): 064315. doi:10.1063/1.4864355. PMID 24527923.
25. Dattani, Nikesh S.; Le Roy, Robert J. (2015). "Beryllium monohydride (BeH): Where we are now, after 86 years of spectroscopy". Journal of Molecular Spectroscopy. 311: 76–83. arXiv:1408.3301. Bibcode:2015JMoSp.311...76D. doi:10.1016/j.jms.2014.09.005. |
1ea0e53da0497a0a | Sofja Kovalevskaja Award 2006 - Award Winners
Jens Bredenbeck
Molecule dynamics - mainspring of chemistry and biology
Elementary vital functions, chemical reactions, the behaviour of substances in our environment - the driving force behind all of these phenomena is the movement of molecules. Molecules interact and change their structure, sometimes slowly, sometimes at incredible speed. Jens Bredenbeck is developing new measuring techniques which can keep up with the molecules' pace. Multidimensional infrared spectroscopy is the name given to the method which measures molecular motion with ultrashort infrared laser pulses. This molecular motion detector should help us to understand what important processes on the molecular level look like in real time, such as how biomolecules fold themselves into the right structure and how they fulfil their vitally important tasks.
Host Institute: Frankfurt a.M. University, Institute of Biophysics
Host: Prof. Dr. Josef Wachtveitl
• Jens BredenbeckDr. Jens Bredenbeck,
born in Germany in 1975, studied chemistry at Darmstadt Technical University, Göttingen University and Zürich University, Switzerland, where he completed his doctorate at the Institute of Physical Chemistry in 2005. He is currently continuing his research at the FOM Institute for Atomic and Molecular Physics in Amsterdam, Netherlands.
Jure Demsar
Solid State Physics
New impulse for developing usable superconductors
Greenhouse gases, climate change and rising prices - the consequences of our use of energy are onerous. The idea that it might be possible to conduct electrical current without loss, to transform it and use it in engines - in a completely new way - sounds rather like a fairytale. Precisely this, however, i.e. the superconductivity of certain materials, has long since become reality. But only in the lab. Problems with the materials as well as the low temperatures required make it difficult to transform the new superconductors into usable electricity conductors. Thus, Jure Demsar is investigating novel so-called strongly-correlated high temperature superconductors. With the aid of ultrafast laser pulses he is observing in real time how electrons and other excitations behave and interact in this highly correlated superconducting state, and drawing inferences for optimising the material. The dream of loss-free conductors and other new electronic applications could move a step closer thanks to superconductivity research.
Host Institute: Konstanz University, Department of Physics, Modern Optics and Photonics
Host: Prof. Dr. Thomas Dekorsy
• Jure DemsarDr. Jure Demsar,
born in Slovenia in 1970, studied physics at Ljubljana University, where he took his doctorate in 2000. He continued his research in Ljubljana at the Jo¿ef Stefan Institute, in the Complex Matter Department, before receiving a two-year fellowship to research at the Los Alamos National Laboratory in the United States. Since then, Demsar has been working at the Jo¿ef Stefan Institute where he attained his professorial qualification (Habilitation) in 2005.
Felix Engel
Cell biology
Hearts that heal themselves
The human heart is a unique organ in the true sense of the word: adult heart cells are unable to divide. If they die off as a result of a heart attack, for instance, the tissue cannot rebuild itself. Felix Engel is searching for a way of encouraging adult heart cells to divide - a capacity inherent in youthful cells, but one which they lose shortly after birth. As Felix Engel and his colleagues discovered, responsibility for this lies with a protein. If it is blocked, the cell regains its capacity to divide. What has worked in experiments on animals is now supposed to be used to treat humans successfully and thus be developed as an alternative to the controversial treatment with stem cells.
Host Institute: Max Planck Institute for Heart and Lung Research, Bad Nauheim
Host: Prof. Dr. Thomas Braun
• Felix EngelDr. Felix Engel,
born in Germany in 1971, previously worked at the Children's Hospital/Harvard Medical School in Boston. Engel studied biotechnology at Berlin Technical University and completed his doctorate there in 2001 after working on his thesis externally at the Max Delbrück Centre for Molecular Medicine in Berlin.
Natalia Filatkina
Historical Philology
Vom Kerbholz zur Datenbank
What did the German expression, einen blauen Mantel umhängen, mean in the Middle Ages? What is a tally stick, what has it got to do with committing a criminal offence and how does it come about that this term is still used in the same context in modern German? These are the questions being answered by Natalia Filatkina who is investigating the history of such formulaic figures of speech in German. These so-called phraseologisms are, after all, a salient feature of all languages and essential for understanding them. What are the social, historical and cultural phenomena underlying these ancient phraseologisms? What conclusions can be drawn for modern language? So far, there have only been fragmentary investigations in this field. In her pioneering work, which is combining historical philology with the international technologies of markup languages, Natalia Filatkina is preparing an electronic body of texts from the 8th to the 17th centuries and interpreting them according to modern linguistic criteria. In this way, a data base is being created that will bring a part of cultural history nearer not only to an interdisciplinary circle of experts but also to a broad non-academic public and will generate new knowledge for the present day.
Host Institute: Trier University, Department of German, Older German Philology
Host: Prof. Dr. Claudine Moulin
• Natalia FilatkinaDr. Natalia Filatkina,
born in the Russian Federation in 1975, studied at the Moscow State Linguistic University, the Humboldt University Berlin on a DAAD scholarship, the University of Luxembourg, and Bamberg University where she took her doctorate in 2003. Her dissertation on the Luxembourg language was awarded the Prix d'encouragement for young researchers by the University of Luxembourg. She is working in the field of Older German Philology in the Department of German at Trier University.
Olga Holtz
Numerical Analysis
The way out of the data jungle
Whether you are looking at the handling and flying qualities of the new Airbus, developing a new drug to combat Aids or designing the ideal underground timetable for a city with more than a million inhabitants - at some time or other you will have to do some complicated computations. The amount of data computers have to cope with is extremely large, we are talking in terms of millions of equations and unknowns, and they only have a finite number of digits for representing a number. In order to solve this problem using reliable and fast algorithms you need to know as much about computers as mathematics. Olga Holtz is working at the interface of pure and applied mathematics. She is searching for methods which are both fast and reliable - which in this field of applied mathematics is usually a contradiction in terms. Her project, developing a method of matrix multiplication, should provide the solution to a multitude of computational calculations in science and engineering.
Host Institute: Berlin Technical University, Institute of Mathematics
Host: Prof. Dr. Volker Mehrmann
• Olga HoltzDr. Olga Holtz,
born in the Russian Federation in 1973, studied applied mathematics in her own country at the Chelyabinsk State Technical University and at the University of Wisconsin Madison in the United States, where she received a doctorate in mathematics in 2000 and subsequently continued her research in the Department of Computer Sciences. She was a Humboldt Research Fellow at Berlin Technical University before being appointed to the University of California, Berkeley, where she has been working ever since.
Reinhard Kienberger
Electron and Quantum Optics
Using x-ray flashes to visualise inconceivable speed
If you want to observe and understand how chemical bonds evolve, how electrons move in semi-conductors or how light is turned into chemical energy through photosynthesis, you have to be pretty fast, because in these chemical, atomic or biological processes we are dealing with fractions of a second, so-called attoseconds, which last no longer than a trillionth of a second. Reinhard Kienberger has significantly contributed to developing observation methods which use ultra fast, intensive x-ray flashes on the attosecond scale to visualise and, in future maybe even to be able to control what has so far been unobservable. Novel lasers based on ultraviolet light or x-rays as well as improved radiation therapies in medicine are just a few of the possible future applications ensuing from the young discipline of attosecond research.
Host Institute: Max Planck Institute of Quantum Optics, Laboratory for Attosecond and High-Field Physics, Garching, near Munich
Host: Prof. Dr. Ferenc Krausz
• Reinhard Kienberger Dr. Reinhard Kienberger,
studied at Vienna Technical University, Austria, and completed his doctorate there with a dissertation on quantum mechanics in 2002. He subsequently became a fellow of the Austrian Academy of Sciences, researching at Stanford University's Stanford Linear Accelerator Center, Menlo Park in California. He is currently working at the Max Planck Institute of Quantum Optics in Garching.
Marga Cornelia Lensen
Macromolecular Chemistry
Turning to nature: made-to-measure hydrogels for medical systems
If the first thing you associate with a happy baby is a dry nappy, it probably does not occur to you that both the parents and the baby actually have the blessings of biomaterial research to thank for this satisfactory state of affairs. The reason for this is that nappies and other hygiene products for absorbing moisture contain the magic anti-moisture ingredients known as hydrogels. These are three-dimensional polymer networks which can store many times their own weight in water and release it again. Humans have copied this principle from nature where hydrogels proliferate, in plants for instance. But hydrogels have much greater potential than this, for example in bioresearch or medicine. They might release doses of drugs in the body or act as sensors. They might also be used as artificial muscles or to bond natural tissue with artificial implants. This would require gels with properties made-to-measure through utilising nanotechnology. To lay the foundations for this, Marga Cornelia Lensen is investigating ways of changing the structure of the gels and how they interact with cells. Consequently, one of the things she is going to do is to use novel nanoimprint technology, which, so far, has largely been tested on hard material, to structure hydrogels and insert them as carriers for experiments on living cells.
Host Institute: RWTH Aachen, German Wool Research Institute
Host: Prof. Dr. Martin Möller
• Marga Cornelia LensenDr. Marga Cornelia Lensen,
born in the Netherlands in 1977, studied chemistry at Wageningen University and at Radboud University Nijmegen, where she took her doctorate in 2005. As a Humboldt Research Fellow she has been working at her host institute at RWTH Aachen, where she will continue her research as a Kovalevskaja Award Winner, since October 2005.
Martin Lövden
Developmental Psychology
Tracking down the secret of life-long learning
In our aging societies in Europe, the idea of life-long learning has gained a special relevance. But although the learning ability of young brains is considerable and has been well researched, there are not many studies on the reasons for the deterioration of learning ability in old age and how to deal with it. Martin Lövden is investigating the neurochemical, neuroanatomical and neurofunctional conditions for successful learning in old age and the consequences for everyday life. To this end, he uses neuroimaging methods, such as functional resonance imaging and resonance spectroscopy, by which he can observe the brains of old and young test subjects during memory training in order to track down the neurological secret of successful learning and its limitations in old age.
Host Institute: Max Planck Institute for Human Development, Research Area Lifespan Psychology, Berlin
Host: Prof. Dr. Ulman Lindenberger
• Martin LövdenDr. Martin Lövden,
born in Sweden in 1972, studied psychology at Salzburg University in Austria and at the universities of Lund and Stockholm in Sweden as well as neuroscience at the Karolinska Institute in Stockholm. He was awarded his doctorate at Stockholm University in 2002. He continued his research at the Saarland University in Saarbrücken and is currently working at the Max Planck Institute for Human Development in Berlin.
Thomas Misgeld
Nerve fibres: the brain's fast wire
In the nervous system information is transported in the form of electrical impulses. To this end, every nerve cell has an appendage, the function of which is similar to that of a telephone cable - the nerve fibres, also called axons. Axons run through the brain and the spinal cord to the switch points at the nerve roots and have a certain capacity for learning. They are able to adapt to new requirements. Not a lot is known about how this adaptation functions and how axons protect themselves against damage. So, Thomas Misgeld is investigating the axons of living mice using high resolution microscopy. He wants to discover how nerve fibres are nourished, adapted and maintain their efficiency in a healthy organism. This basic information could lead to the development of new therapies for diseases like multiple sclerosis or for spinal cord injuries.
Host Institute: Munich Technical University, Institute of Neurosciences
Host: Prof. Dr. Arthur Konnerth
• Thomas MisgeldDr. Thomas Misgeld,
born in Germany in 1971, studied medicine at Munich Technical University where he completed his doctorate in 1999. He continued his research in the department of clinical neuroimmunology at the Max Planck Institute of Neurobiology in Martinsried and at Washington University in St. Louis. His most recent position was at Harvard University in Cambridge. In 2005, he was granted the first ever Wyeth Multiple Sclerosis Junior Research Award and the Robert Feulgen Prize by the Society for Histochemistry.
Benjamin Schlein
Mathematical Physics
Seeking evidence in the quantum world
In the first half of the 20th century, when physicists observed that new properties were revealed by light interacting with material, classical physics reached its limits. It was the birth of quantum mechanics, the principles of which are part of common knowledge in physics nowadays, such as the fact that material particles exhibit waves, just like light. This is a principle used in modern electron microscopes. One of the main pillars of quantum mechanics is the Schrödinger equation which, to this day, has been very successful in predicting experiments. But when it comes to examining macroscopic systems - i.e. systems composed of multitudes of the tiniest particles - the amount of data is so enormous that even the most modern computers are not powerful enough to solve the Schrödinger equation. Benjamin Schlein is trying to develop mathematical methods which will make it possible to derive simpler equations to describe the dynamics of macroscopic systems. He wants to create a solid mathematical basis on which to assess and develop further applications in quantum mechanics.
Host Institute: Munich University, Institute of Mathematics
Host: Prof. Dr. Laszlo Erdös
• Benjamin SchleinDr. Benjamin Schlein,
born in Switzerland in 1975, studied theoretical physics at the Swiss Federal Institute of Technology (ETH) in Zürich and completed his doctorate there with a dissertation on mathematical physics in 2002. He subsequently continued his research in the United States, at the universities of New York, Stanford, Harvard and California in Davis.
Taolei Sun
Medical Biochemistry
Novel biocompatible materials for medical systems
"Surfaces are a creation of the devil", the famous physicist and Nobel Prize Winner, Wolfgang Pauli, once remarked when he realised how much more complex the surfaces of materials were than their massive substance. Many technical, indeed everyday applications depend on the properties of material surfaces and their interactions, which is especially important in the biomedical fields. Just consider the surfaces of artificial joints and other implants, or artificial access to the human bloodstream in intensive medicine or cancer treatment. All of them have to get along really well with the surfaces of human tissue or human cells. Taolei Sun is working on biocompatible, artificial implants and medical devices, combining modern nanotechnology with chemical surface modification. His aim is to use nanostructured polymeric surfaces with special wettability as a platform for the emergence of a new generation of biocompatible materials.
Host Institute: Universität Münster, Physikalisches Institut
Host: Prof. Dr. Harald Fuchs
• Taolei SunDr. Taolei Sun,
born in China in 1974, studied at Wuhan University and at the Technical Institute of Physics and Chemistry in the Chinese Academy of Sciences in Beijing, where he took his doctorate in 2002, and then continued his research. He subsequently worked at the National Center for Nanosciences and Technology of China in Beijing before becoming a Humboldt Research Fellow in the Institute of Physics at Münster University where he will now carry out research as a Kovalevskaja Award Winner.
Kristina Güroff
Press, Communications
and Marketing
Phone: +49 228 833-144/257
Fax: +49 228 833-441
Georg Scholl
Head of
Press, Communications and Marketing
Phone: +49 228 833-258
Fax: +49 228 833-441 |
82966b30d3a395dc | HomeAstrophysicsLOG#245. What is fundamental?
LOG#245. What is fundamental?
Some fundamental mottos:
Fundamental spacetime: no more?
Fundamental spacetime falls: no more?
Fundamentalness vs emergence(ness) is an old fight in Physics. Another typical mantra is not Shamballa but the old endless debate between what theory is fundamental (or basic) and what theory is effective (or derived). Dualities in superstring/M-theory changed what we usually meant by fundamental and derived, just as the AdS/CFT correspondence or map changed what we knew about holography and dimensions/forces.
Generally speaking, physics is about observables, laws, principles and theories. These entities (objects) have invariances or symmetries related to dynamics and kinematics. Changes or motions of the entities being the fundamental (derived) degrees of freedom of differente theories and models provide relationships between them, with some units, magnitudes and systems of units being more suitable for calculations. Mathematics is similar (even when is more pure). Objects are related to theories, axioms and relations (functors and so on). Numbers are the key of mathematics, just as they measure changes in forms or functions that serve us to study geometry, calculus and analysis from different abstract-like viewpoints.
The cross-over between Physics and Mathematics is called Physmatics. The merger of physics and mathematics is necessary and maybe inevitable to understand the whole picture. Observers are related to each other through transformations (symmetries) that also holds for force fields. Different frameworks are allowed in such a way that the true ideal world becomes the real world. Different universes are possible in mathematics an physics, and thus in physmatics too. Interactions between Universes are generally avoided in physics, but are a main keypoint for mathematics and the duality revolution (yet unfinished). Is SR/GR relativity fundamental? Is QM/QFT fundamental? Are fields fundamental? Are the fundamental forces fundamental? Is there a unique fundamental force and force field? Is symplectic mechanics fundamental? What about Nambu mechanics? Is the spacetime fundamental? Is momenergy fundamental?
Newtonian physics is based on the law
(1) \begin{equation*} F^i=ma_i=\dfrac{dp_i}{dt} \end{equation*}
Relativistic mechanics generalize the above equation into a 4d set-up:
(2) \begin{equation*} \mathcal{F}=\dot{\mathbcal{P}}=\dfrac{d\mathcal{P}}{d\tau} \end{equation*}
and p_i=mv_i and \mathcal{P}=M\mathcal{V}. However, why not to change newtonian law by
(3) \begin{equation*}F_i=ma_0+ma_i+\varepsilon_{ijk}b^ja^k+\varepsilon_{ijk}c^jv^k+c_iB^{jk}a_jb_k+\cdots\end{equation*}
(4) \begin{equation*}\vec{F}=m\vec{a}_0+m\vec{a}+\vec{b}\times\vec{a}+\vec{c}\times\vec{v}+\vec{c}\left(\vec{a}\cdot\overrightarrow{\overrightarrow{B}} \cdot \vec{b}\right)+\cdots\end{equation*}
Quantum mechanics is yet a mystery after a century of success! The principle of correspondence
(5) \begin{equation*} p_\mu\rightarrow -i\hbar\partial_\mu \end{equation*}
allow us to arrive to commutation relationships like
(6) \begin{align*} \left[x,p\right]=i\hbar\varepsilon^j_{\;\; k}\\ \left[L^i,L^j\right]=i\hbar\varepsilon_{k}^{\;\; ij}L^k\\ \left[x_\mu,x_\nu\right]=\Theta_{\mu\nu}=iL_p^2\theta_{\mu\nu}\\ \left[p_\mu,p_\nu\right]=K_{\mu\nu}=iL_{\Lambda}K_{\mu\nu} \end{align*}
and where the last two lines are the controversial space-time uncertainty relationships if you consider space-time is fuzzy at the fundamental level. Many quantum gravity approaches suggest it.
Let me focus now on the case of emergence and effectiveness. Thermodynamics is a macroscopic part of physics, where the state variables internal energy, free energy or entropy (U,H,S,F,G) play a big role into the knowledge of the extrinsinc behaviour of bodies and systems. BUT, statistical mechanics (pioneered by Boltzmann in the 19th century) showed us that those macroscopic quantities are derived from a microscopic formalism based on atoms and molecules. Therefore, black hole thermodynamics point out that there is a statistical physics of spacetime atoms and molecules that bring us the black hole entropy and ultimately the space-time as a fine-grained substance. The statistical physics of quanta (of action) provides the basis for field theory in the continuum. Fields are a fluid-like substance made of stuff (atoms and molecules). Dualities? Well, yet a mystery: they seem to say that forces or fields you need to describe a system are dimension dependent. Also, the fundamental degrees of freedom are entangled or mixed (perhaps we should say mapped) to one theory into another.
I will speak about some analogies:
1st. Special Relativity(SR) involves the invariance of objects under Lorentz (more generally speaking Poincaré) symmetry: X'=\Lambda X. Physical laws, electromagnetism and mechanics, should be invariant under Lorentz (Poincaré) transformations. That will be exported to strong forces and weak forces in QFT.
2nd. General Relativity(GR). Adding the equivalence principle to the picture, Einstein explained gravity as curvature of spacetime itself. His field equations for gravity can be stated into words as the motto Curvature equals Energy-Momentum, in some system of units. Thus, geometry is related to dislocations into matter and viceversa, changes in the matter-energy distribution are due to geometry or gravity. Changing our notion of geometry will change our notion of spacetime and the effect on matter-energy.
3rd. Quantum mechanics (non-relativistic). Based on the correspondence principle and the idea of matter waves, we can build up a theory in which particles and waves are related to each other. Commutation relations arise: \left[x,p\right]=i\hbar, p=h/\lambda, and the Schrödinger equation follows up H\Psi=E\Psi.
4th. Relativistic quantum mechanics, also called Quantum Field Theory(QFT). Under gauge transformations A\rightarrow A+d\varphi, wavefunctions are promoted to field operators, where particles and antiparticles are both created and destroyed, via
\[\Psi(x)=\sum a^+u+a\overline{u}\]
Fields satisfy wave equations, F(\phi)=f(\square)\Phi=0. Vacuum is the state with no particles and no antiparticles (really this is a bit more subtle, since you can have fluctuations), and the vacuum is better defined as the maximal symmetry state, \ket{\emptyset}=\sum F+F^+.
5th. Thermodynamics. The 4 or 5 thermodynamical laws follow up from state variables like U, H, G, S, F. The absolute zero can NOT be reached. Temperature is defined in the thermodynamical equilibrium. dU=\delta(Q+W), \dot{S}\geq 0. Beyond that, S=k_B\ln\Omega.
6th. Statistical mechanics. Temperature is a measure of kinetic energy of atoms an molecules. Energy is proportional to frequency (Planck). Entropy is a measure of how many different configurations have a microscopic system.
7th. Kepler problem. The two-body problem can be reduce to a single one-body one-centre problem. It has hidden symmetries that turn it integrable. In D dimensions, the Kepler problem has a hidden O(D) (SO(D) after a simplification) symmetry. Beyond energy and angular momentum, you get a vector called Laplace-Runge-Lenz-Hamilton eccentricity vector that is also conserved.
8th. Simple Harmonic Oscillator. For a single HO, you also have a hidden symmetry U(D) in D dimensions. There is an additional symmetric tensor that is conserved.
9th. Superposition and entanglement. Quantum Mechanics taught us about the weird quantum reality: quantum entities CAN exist simultaneously in several space position at the same time (thanks to quantum superposition). Separable states are not entangled. Entangled states are non-separable. Wave functions of composite systems can sometimes be entangled AND non-separated into two subsystems.
Information is related, as I said in my second log post, to the sum of signal and noise. The information flow follows from a pattern and a dissipative term in general. Classical geometry involves numbers (real), than can be related to matrices(orthogonal transformations or galilean boosts or space traslations). Finally, tensor are inevitable in gravity and riemannian geometry that follows up GR. This realness can be compared to complex geometry neceessary in Quantum Mechanics and QFT. Wavefunctions are generally complex valued functions, and they evolve unitarily in complex quantum mechanics. Quantum d-dimensional systems are qudits (quinfits, or quits for short, is an equivalent name for quantum field, infinite level quantum system):
(7) \begin{align*} \vert\Psi\rangle=\vert\emptyset\rangle=c\vert\emptyset\rangle=\mbox{Void/Vacuum}\ \langle\Psi\vert\Psi\rangle=\vert c\vert^2=1 \end{align*}
(8) \begin{align*} \vert\Psi\rangle=c_0\vert 0\rangle+c_1\vert 1\rangle=\mbox{Qubit}\\ \langle\Psi\vert\Psi\rangle=\vert c_0\vert^2+\vert c_1\vert^2=1\\ \vert\Psi\rangle=c_0\vert 0\rangle+c_1\vert 1\rangle+\cdots+c_{d-1}\vert d\rangle=\mbox{Qudit}\\ \sum_{i=0}^{d-1}\vert c_i\vert^2=1 \end{align*}
(9) \begin{align*} \vert\Psi\rangle=\sum_{n=0}^\infty c_n\vert n\rangle=\mbox{Quits}\\ \langle\Psi\vert\Psi\rangle=\sum_{i=0}^\infty \vert c_i\vert^2=1:\mbox{Quantum fields/quits} \end{align*}
(10) \begin{align*} \vert\Psi\rangle=\int_{-\infty}^\infty dx f(x)\vert x\rangle:\mbox{conquits/continuum quits}\\ \mbox{Quantum fields}: \int_{-\infty}^\infty \vert f(x)\vert^2 dx = 1\\ \sum_{i=0}^\infty\vert c_i\vert^2=1\\ L^2(\matcal{R}) \end{align*}
0.1. SUSY The Minimal Supersymmetry Standard Model has the following set of particles:
To go beyond the SM, BSM, and try to explain vacuum energy, the cosmological constant, the hierarchy problem, dark matter, dark energy, to unify radiation with matter, and other phenomena, long ago we created the framework of supersymmetry (SUSY). Essentially, SUSY is a mixed symmetry between space-time symmetries and internal symmetries. SUSY generators are spinorial (anticommuting c-numbers or Grassmann numbers). Ultimately, SUSY generators are bivectors or more generally speaking multivectors. The square of a SUSY transformation is a space-time traslation. Why SUSY anyway? There is another way, at least there were before the new cosmological constant problem (lambda is not zero but very close to zero). The alternative explanation of SUSY has to do with the vacuum energy. Indeed, originally, SUSY could explain why lambda was zero. Not anymore and we do neeed to break SUSY somehow. Otherwise, breaking SUSY introduces a vacuum energy into the theories. Any superalgebra (supersymmetric algebra) has generators P_\mu, M_{\mu\nu}, Q_\alpha. In vacuum, QFT says that fields are a set of harmonic oscillators. For sping j, the vacuum energy becomes
(52) \begin{equation*} \varepsilon_0^{(j)}=\dfrac{\hbar \omega_j}{2} \end{equation*}
(53) \begin{equation*} \omega_j=\sqft{k^2+m_j^2} \end{equation*}
Vacuum energy associated to any oscillator is
(54) \begin{equation*} E_0^{(j)}=\sum \varepsilon_0^{(j)}=\dfrac{1}{2}(-1)^{2j}(2j+1)\displaystyle{\sum_k}\hbar\sqrt{k^2+m_j^2} \end{equation*}
Taking the continuum limit, we have the vacuum master integral, the integral of cosmic energy:
(55) \begin{equation*} E_0(j)=\dfrac{1}{2}(-1)^{2j}(2j+1)\int_0^\Lambda d^3k\sqrt{k^2+m_j^2} \end{equation*}
Develop the square root in terms of m/k up to 4th order, to get
(56) \begin{equation*} E_0(j)=\dfrac{1}{2}(-1)^{2j}(2j+1)\int_0^\Lambda d^3k k\left[1+\dfrac{m_j^2}{2k^2}-\dfrac{1}{8}\left(\dfrac{m_j^2}{k^2}\right)^2+\cdots\right] \end{equation*}
(57) \begin{equation*} E_0(j)=A(j)\left[a_4\Lambda^4+a_2\Lambda^2+a_{log}\log(\Lambda)+\cdots\right] \end{equation*}
If we want absence of quadratic divergences, associated to the cosmological constant, and the UV cut-off, we require
(58) \begin{equation*} \tcboxmath{ \sum_j(-1)^{2j}(2j+1)=0} \end{equation*}
If we want absence of quadratic divergences, due to the masses of particles as quantum fields, we need
(59) \begin{equation*} \tcboxmath{\sum_j(-1)^{2j}(2j+1)m_j^2=0} \end{equation*}
Finally, if we require that there are no logarithmic divergences, associated to the behavior to long distances and renormalization, we impose that
(60) \begin{equation*} \tcboxmath{\sum_j(-1)^{2j}(2j+1)m_j^4=0} \end{equation*}
Those 3 sum rules are verified if, simultaneously, we have that
(61) \begin{equation*} N_B=N_F \end{equation*}
(62) \begin{equation*} M_B=M_F \end{equation*}
That is, equal number of bosons and fermions, and same masses of all the boson and fermion modes. These conditions are satisfied by SUSY, but the big issue is that the SEM is NOT supersymmetric and that the masses of the particles don’t seem to verify all the above sum rules, at least in a trivial fashion. These 3 relations, in fact, do appear in supergravity and maximal SUGRA in eleven dimensions. We do know that 11D supergravity is the low energy limit of M-theory. SUSY must be broken at some energy scale we don’t know where and why. In maximal SUGRA, at the level of 1-loop, we have indeed those 3 sum rules plus another one. In compact form, they read
(63) \begin{equation*} \tcboxmath{\sum_{J=0}^{2}(-1)^{2J}(2J+1)(M^{2}_J)^k=0,\;\;\; k=0,1,2,3} \end{equation*}
Furthermore, these sum rules imply, according to Scherk, that there is a non zero cosmological constant in maximal SUGRA.
\textbf{Exercise}. Prove that the photon, gluon or graviton energy density can be written in the following way
In addition to that, prove that the energy density of a fermionic massive m field is given by
Compare the physical dimensions in both cases.
0.2. Extra dimensions D-dimensional gravity in newtonian form reads:
(64) \begin{equation*} F_G=G_N(D)\dfrac{Mm}{r^{D-2}} \end{equation*}
Compatifying extra dimensions:
(65) \begin{equation*} F_G=G_N(D)\dfrac{Mm}{L^Dr^2} \end{equation*}
and then
(66) \begin{equation*} \tcboxmath{ G_4=\dfrac{G_N(D)}{L^D}} \end{equation*}
or with M_P^2=\dfrac{\hbar c}{G_N},
(67) \begin{equation*} \tcboxmath{M_P^2=V(XD)M_\star^2} \end{equation*}
Thus, weakness of gravity is explained due to dimensional dilution.
Similarly, for gauge fields:
(68) \begin{equation*} \tcboxmath{ g^2(4d)=\dfrac{g^2(XD)}{V_X}} \end{equation*}
View ratings
Rate this article
Leave a Reply
0 visitors online now
0 guests, 0 members
Max visitors today: 4 at 09:01 am UTC
This month: 10 at 10-25-2020 11:14 am UTC
This year: 54 at 01-21-2020 01:53 am UTC
All time: 177 at 11-13-2019 10:44 am UTC |
32d0ca5d97bab5a5 | Facts So Romantic
The Quantum Theory That Peels Away the Mystery of Measurement
Reprinted with permission from Quanta Magazine’s Abstractions blog.
Imagine if all our scientific theories and models told us only about averages: if the best weather forecasts could only give you the average daily amount of rain expected over the next month, or if astronomers could only predict the average time between solar eclipses.
A recent test has confirmed the predictions of quantum trajectory theory, which describes what happens during the long-mysterious “collapse” of a quantum system.Pixabay
In the early days of quantum mechanics, that seemed to be its inevitable limitation: It was a probabilistic theory, telling us only what we will observe on average if we collect records for many events or particles. To Erwin Schrödinger, whose eponymous equation prescribes how quantum objects behave, it was utterly meaningless to think about specific atoms or electrons doing things in real time. “It is fair to state,” he wrote in 1952, “that we are not experimenting with single particles. … We are scrutinizing records of events long after they have happened.” In other words, quantum mechanics seemed to work only for “ensembles” of many particles. “When the ensemble is large enough, it’s possible to acquire sufficient statistics to check if the predictions are correct or not,” said Michel Devoret, a physicist at Yale University.
In a direct challenge to Schrödinger’s pessimistic view, “QTT deals precisely with single particles and with events right as they are happening,” said Zlatko Minev, who completed his doctorate in Devoret’s lab at Yale. By applying QTT to an experiment on a quantum circuit, Minev and his co-workers were recently able to capture a “quantum leap”—a switch between two quantum energy states—as it unfolded over time. They were also to achieve the remarkable feat of catching such a jump in midflight and reversing it.
“Quantum trajectory theory makes predictions that are impossible to make with the standard formulation,” Devoret said. In particular, it can predict how individual quantum objects such as particles will behave when they are observed—that’s to say, when measurements are made on them.
Schrödinger’s equation can’t do that. It predicts perfectly how an object will evolve over time if we don’t measure it. But add in measurement and all you can get from the Schrödinger equation is a prediction of what you’ll see on average over many measurements, not what any individual system will do. It won’t tell you what to expect from a lone quantum jump, for example.
Measurement derails the Schrödinger equation because of a peculiar phenomenon called quantum back-action. A quantum measurement influences the system being observed: The act of observation injects a kind of random noise into the system. This is ultimately the source of Heisenberg’s famous uncertainty principle. The uncertainty in a measurement is not, as Heisenberg initially thought, an effect of clumsy intervention in a delicate quantum system—a photon striking a particle and pushing it off course, say. Rather, it’s an unavoidable outcome of the intrinsically randomizing effect of observation itself. The Schrödinger equation does just fine at predicting how a quantum system evolves—unless you measure it, in which case the result is unpredictable.
Quantum back-action can be thought of as an imperfect alignment between the system and the measuring apparatus, Devoret said, because you don’t know what the system is like until you look. He compares it to an observation of a planet using a telescope. If the planet isn’t quite in the center of the telescope’s frame, the image will be fuzzy.
QTT, however, can take back-action into account. The catch is that, to apply QTT, you need to have nearly complete knowledge about the behavior of the system you’re observing. Normally, an observation of a quantum system overlooks a lot of potentially available information: Some emitted photons get lost in their environment, say. But if pretty much everything is measured and known about the system—including the random consequences of the back-action—then you can build feedback into the measurement apparatus that will make continuous adjustments to compensate for the back-action. It’s equivalent to adjusting the telescope’s orientation to keep the planet in the center.
For this to work, the measurement apparatus has to collect data faster than the rate at which the system undergoes significant change, and it has to do so with nearly perfect efficiency. “Essentially all the information leaving the system and being absorbed by the environment must pass through the measurement apparatus and be recorded,” Devoret said. In the astronomical analogy, the planet would have to be illuminated only by light coming from the observatory, which would somehow also collect all the light that’s reemitted.
Achieving this degree of control and information capture is very challenging. That’s why, although QTT has been around for a couple decades, “it is only within the past five years that we can experimentally test it,” said William Oliver of the Massachusetts Institute of Technology. Minev developed innovations to ensure quantum-measurement efficiencies of up to 91 percent, and “this key technological development is what allowed us to turn the prediction into a verifiable, implementable experiment,” he said.
With these innovations, “it’s possible to know at all times where the system is, given its recent past history, even if some features of the motion are rendered unpredictable in the long term,” Devoret said. What’s more, this near-complete knowledge of how the system changes smoothly over time allows researchers to “rewind the tape” and avoid the apparently irreversible “wave function collapse” of the standard quantum formalism. That’s how the researchers were able to reverse a quantum jump in midflight.
The excellent agreement between the predictions of QTT and the experimental results suggests something deeper than the mere fact that the theory works for single quantum systems. It means that the highly abstract “quantum trajectory” that the theory refers to (a term coined in the 1990s by physicist Howard Carmichael, a coauthor of the Yale paper) is a meaningful entity—in Minev’s words, it “can be ascribed a degree of reality.” This contrasts with the common view when QTT was first introduced, which held that it was just a mathematical tool with no clear physical significance.
But what exactly is this trajectory? One thing is clear: It’s not like a classical trajectory, meaning a path taken in space. It’s more like the path taken through the abstract space of possible states the system might have, which is called Hilbert space. In traditional quantum theory, that path is described by the wave function of the Schrödinger equation. But crucially, QTT can also address how measurements affect that path, which the Schrödinger equation can’t do. In effect, the theory uses careful and complete observations of the way the system has behaved so far to predict what it will do in the future.
You might loosely compare this to forecasting the trajectory of a single air molecule. The Schrödinger equation plays a role a bit like the classical diffusion equation, which predicts how far on average such a particle travels over time as it undergoes collisions. But QTT predicts where a specific particle will go, basing its forecast on detailed information about the collisions the particle has experienced already. Randomness is still at play: You can’t perfectly predict a trajectory in either case. But QTT will give you the story of an individual particle—and the ability to see where it might be headed next.
Philip Ball is a writer based in London. His latest book is How To Grow A Human: Adventures in Who We Are and How We Are Made
Get the Nautilus newsletter
The newest and most popular articles delivered right to your inbox!
Join the Discussion |
6c591d4568e5449f |
High-order harmonic generation enhanced by XUV light
Christian Buth, Markus C. Kohler, Joachim Ullrich, and Christoph H. Keitel
The combination of high-order harmonic generation (HHG) with resonant xuv excitation of a core electron into the transient valence vacancy that is created in the course of the HHG process is investigated theoretically. In this setup, the first electron performs a HHG three-step process whereas, the second electron Rabi flops between the core and the valence vacancy. The modified HHG spectrum due to recombination with the valence and the core is determined and analyzed for krypton on the resonance in the ion. We assume an laser with an intensity of about and xuv radiation from the Free Electron Laser in Hamburg (FLASH) with an intensity in the range . Our prediction opens perspectives for nonlinear xuv physics, attosecond x rays, and HHG-based spectroscopy involving core orbitals.
Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany
Argonne National Laboratory, Argonne, Illinois 60439, USA
Max Planck Advanced Study Group at the Center for Free-Electron Laser Science, Notkestraße 85, 22607 Hamburg, Germany
Corresponding author:
190.2620, 140.2600, 190.7220, 260.6048.
High-order harmonic generation (HHG) by atoms in intense optical laser fields is a fascinating phenomenon and a versatile tool; it has spawned the field of attoscience, is used for spectroscopy, and serves as a light source in many optical laboratories [1]. Present-day theory of HHG largely gravitates around the single-active electron (SAE) approximation and the restriction to HHG from valence electrons [2, 3, 4, 1].
Several extensions to the SAE view of HHG have been investigated previously. A two-electron scheme was considered that uses sequential double ionization by an optical laser with a subsequent nonsequential double recombination; in helium this leads to a second plateau with about 12 orders of magnitude lower yield than the primary HHG plateau [5]. Two-color HHG (optical plus xuv light) has been studied in a one-electron model [6] and with many-electron effects included by a frequency-dependent polarization [7]; the xuv radiation assists thereby in the ionization process leading to an overall increased yield [6] and the emergence of a new plateau [7], the latter, however, at a much lower yield. The above schemes suffer from tiny conversion efficiency beyond the conventional HHG cutoff (maximum photon energy).
We propose an efficient two-electron scheme for a HHG process manipulated by intense xuv light from the newly constructed free electron lasers (FEL)—e.g., the Free Electron Laser in Hamburg (FLASH). Our principal idea is sketched in Fig. 1. In the parlance of the three-step model [2, 3], HHG proceeds as follows: (a) the atomic valence is tunnel ionized; (b) the liberated electron propagates freely in the electric field of the optical laser; (c) the direction of the optical laser field is reversed and the electron is driven back to the ion and eventually recombines with it emitting HHG radiation. The excursion time of the electron from the ion is approximately for typical optical laser light. During this time, one can manipulate the ion such that the returning electron sees the altered ion as depicted in Fig. 1. Then, the emitted HHG radiation bears the signature of the change. Perfectly suited for this modification during the propagation step is xuv excitation of an inner-shell electron into the valence shell. The recombination of the returning electron with the core hole leads to a large increase of the energy of the emitted HHG light as the energy of the xuv photons is added shifting the HHG spectrum towards higher energies. A prerequisite for this to work certainly is that the core hole is not too short lived, i.e., it should not decay before the continuum electron returns.
(Color online) Schematic of the three-step model for the
HHG process augmented by
Figure 1: (Color online) Schematic of the three-step model for the HHG process augmented by xuv excitation of a core electron.
The spatial one-electron states of relevance to the problem are the valence state and the core state of the closed-shell atom. In strong-field approximation, continuum electrons are described by free-electron states for all [4]. The associated level energies are , , and . We need to consider three different classes of two-electron basis states to describe the two-electron dynamics: first, the ground state of the two-electron system is given by the Hartree product ; second, the valence-ionized states with one electron in the continuum and one electron in the core state are ; third, the core-ionized states with one electron in the continuum and one electron in the valence state are . We apply the three assumptions of Lewenstein et al. [4] in a somewhat modified way by considering also phenomenological decay widths of the above three state: and to account for losses due to ionization by the optical and xuv light for and , respectively, and to represent losses from ionization by the optical and xuv light and Auger decay of core holes for with radiative decay of the core hole being safely neglected. Further, the xuv light induces Rabi flopping in the two-level system of and .
The two-electron Hamiltonian of the atom in two-color light (optical laser and xuv) reads ; it consist of three parts: the atomic electronic structure , the interaction with the optical laser , and the interaction with the xuv light . We construct mostly from tensorial products of the corresponding one-particle Hamiltonians , , and . The interaction with the optical and xuv light is treated in dipole approximation in length form [8].
We make the following ansatz for the two-electron wavepacket (in atomic units)
where we introduce a global phase factor based on . The detuning of the xuv photon energy from the energy difference of the two ionic levels is . The index on the amplitudes and indicates which orbital contains the hole.
We insert into the time-dependent Schrödinger equation and project onto the three classes of basis states which yields equations of motion (EOMs) for the involved coefficients. We obtain the following EOM for the ground-state population
The other two EOMs are written as a vector equation, defining the amplitudes , the Rabi frequency [9] for continuous-wave (CW) xuv light and the Rabi matrix
This yields for a CW optical laser electric field oscillating with angular frequency :
We change to the basis of xuv-dressed states using the eigenvectors and eigenvalues , of .
To determine the HHG spectrum, we solve Eq. (2) by neglecting the second term on the right-hand side as in Ref. [4]—its influence is included in , , and —and a constant xuv flux starting at and ending at . The HHG spectrum is given by the Fourier transform of , where is the two-electron electric dipole operator [8], is the ground-state part of the wavepacket (High-order harmonic generation enhanced by XUV light) and is the continuum part, i.e.,
Here, , the ponderomotive potential of the optical laser is , and , are defined as in Ref. [4] augmented by and with replaced by . Further, are Bessel functions and are the coefficients defined in Eq. (17) of Ref. [4] for valence- and core-hole recombination. Further,
Neglecting the factor for now, we see that the peak at . In other words, for the peaks are at the positions of the harmonics from optical laser-only HHG; for , the harmonics are shifted by with respect to the harmonics for such that, in general, none of them coincides with harmonics from . The harmonic photon number spectrum (HPNS) for a single atom—the probability to find a photon with specified energy—along the axis is given by
with the density of free-photon states [9] and the solid angle .
We apply our theory to krypton atoms. The energy levels are for Kr [10] and for Kr with a radial dipole transition matrix element of [8]. The xuv light has the photon energy . The optical laser intensity is set to at a wavelength of . Both xuv and optical light have a pulse duration of . The experimental value for the decay width of Kr vacancies is [11]. The decay widths due to xuv ionization of the atom and the ion are obtained from the sum of the photoionization cross sections of the energetically accessible electrons [12, 13]; the ionization due to the optical laser is determined with Ref. [14] and is larger than the width for xuv ionization and Auger decay for the chosen parameters. We find for an xuv-intensity of : , , and , and for an xuv-intensity of : , , and .
(Color online) Photon number of (Color online) Photon number of
Figure 2: (Color online) Photon number of th harmonic order for xuv intensities of (a) and (b) . The black solid lines show the contribution from recombination with a valence hole whereas the red dashed lines correspond to recombination with a core hole. The lines represent harmonic strengths obtained by integrating over peaks in the HPNS.
In Fig. 2 we show the single-atom HPNS of HHG which is modified by xuv light. We find that the xuv excitation leads to two plateaus, one from valence- and one from core-hole recombination which overlap slightly. The width of the overlap can be tuned by changing the optical laser intensity and interferences between both terms may occur. Even in the case of a moderate xuv intensity [Fig. 2a], the emission rate of HHG from core-hole recombination is substantial. This prediction can be interpreted in terms of an excitation (or even Rabi flopping) [9] of the remaining electron after tunnel ionization of an atom in the HHG process [Fig. 1b]. This electron sits in a two-level system where the electron is either in the valence or the core level. The strength of the HHG emission due to core-hole recombination is roughly proportional to the population of the upper state around . For the population is while it is for . Finally, we need to realize that the dipole matrix element for a recombination with a Kr hole is not substantially different from the one with a Kr hole thus explaining the similar yield of both contributions in Fig. 2b.
The xuv radiation from present-day FELs is generated with the self-amplification of spontaneous emission (SASE) principle; it is fully transversally coherent but exhibits only limited longitudinal coherence. We find that the fluctuating phase does not destroy the spectra [8]. Thus our interpretation of Fig. 2 is not invalidated when we relax the view assumed so far of entirely coherent xuv light with constant amplitude.
In conclusion, we predict HHG light from resonant excitation of transient ions in HHG that allows insights into the physics of core electrons and has various applications: it allows one to generate isolated attosecond x-ray pulses by ionizing atoms near the crests of a single-cycle optical laser pulse and selecting the highest photon energies by filtering [15]. This complements FEL-based strategies to generate attosecond x rays (Ref. [16] and References therein). Our scheme has the advantage that the attosecond pulses have a defined phase-relation to the optical laser and that it can be employed at any FEL with moderate cost and minimal impact on other experiments. Further, our scheme has the potential to become an in situ probe of the dynamics of cations in strong optical fields interacting with intense xuv light. Namely, the HHG spectra depend sensitively on the xuv pulse shape; a reconstruction with frequency resolved optical gating (FROG) [17] may be possible—thus offering the long-sought pulse characterization for SASE xuv light—but requires further theoretical investigation. Additionally, the emitted upshifted light due to core recombination bears the signature of the core orbital; thus it can be used for ultrafast time-dependent chemical imaging [18] involving inner shells which is not feasible so far. This allows one to extend such HHG-based methods to all orbitals that couple to the transient valance vacancy by suitably tuned xuv light. Our findings are not restricted to krypton but HHG spectra for resonant excitation of electrons in neon were successfully computed and will be discussed in future work.
C.B. and M.C.K. were supported by a Marie Curie International Reintegration Grant within the 7 European Community Framework Program (call identifier: FP7-PEOPLE-2010-RG, proposal No. 266551). C.B.’s work was partially funded by the Office of Basic Energy Sciences, Office of Science, U.S. Department of Energy, under Contract No. DE-AC02-06CH11357.
|
ef26e814ec80d68a |
Using effective field theory to analyse low-energy Compton scattering data from protons and light nuclei
H. W. Griehammer, J. A. McGovern,
D. R. Phillips and G. Feldman
Institute for Nuclear Studies, Department of Physics,
The George Washington University, Washington DC 20052, USA
Theoretical Physics Group, School of Physics and Astronomy,
The University of Manchester, Manchester, M13 9PL, UK
Institute of Nuclear and Particle Physics and Department of
Physics and Astronomy, Ohio University, Athens OH 45701, USA
Compton scattering from protons and neutrons provides important insight into the structure of the nucleon. For photon energies up to about MeV, the process can be parameterised by six dynamical dipole polarisabilities which characterise the response of the nucleon to a monochromatic photon of fixed frequency and multipolarity. Their zero-energy limit yields the well-known static electric and magnetic dipole polarisabilities and , and the four dipole spin polarisabilities. The emergence of full lattice QCD results and new experiments at MAMI (Mainz), HIS at TUNL, and MAX-Lab (Lund) makes this an opportune time to review nucleon Compton scattering. Chiral Effective Field Theory (EFT) provides an ideal analysis tool, since it encodes the well-established low-energy dynamics of QCD while maintaining an appropriately flexible form for the Compton amplitudes of the nucleon. The same EFT also describes deuteron and He Compton scattering, using consistent nuclear currents, rescattering and wave functions, and respects the low-energy theorems for photon-nucleus scattering. It can thus also be used to extract useful information on the neutron amplitude from Compton scattering on light nuclei. We summarise past work in EFT on all of these reactions and compare with other theoretical approaches. We also discuss all proton experiments up to about MeV, as well as the three modern elastic deuteron data sets, paying particular attention to the precision and accuracy of each set. Constraining the parameters from the resonance region, we then perform new fits to the proton data up to MeV, and a new fit to the deuteron data. After checking in each case that a two-parameter fit is compatible with the respective Baldin sum rules, we obtain, using the sum-rule constraints in a one-parameter fit, , , for the proton polarisabilities, and , , for the isoscalar polarisabilities, each in units of . Finally, we discuss plans for polarised Compton scattering on the proton, deuteron, He and heavier targets, their promise as tools to access spin polarisabilities, and other future avenues for theoretical and experimental investigation.
Keywords: Compton scattering, proton, neutron and nucleon polarisabilities, spin polarisabilities, Chiral Perturbation Theory, Effective Field Theory, resonance
1 Introduction
Compton scattering has played a major role in the development of modern Physics. In 1871, Lord Rayleigh employed the recently discovered nature of light as an electromagnetic wave to demonstrate that the cross section for light scattering with frequency from neutral atoms behaves as , thereby explaining why the sky is blue [1, 2]. Fifty-six years later, Arthur Holly Compton won the Nobel Prize “for his discovery of the effect named after him”: X-rays have a longer wavelength after they are scattered from electrons, with the difference precisely predicted by a quantum treatment of the electromagnetic radiation and by the relativistic kinematics of the electron [3]. Compton’s experiment thus provided a unified demonstration of two of the great advances in Physics in the early 20th century: relativity and the particle-like nature of light.
In 1935, Bethe and Peierls performed the first calculation of Compton scattering from a nucleus: the deuteron [4]. The field gained momentum in the second half of the 20th century as it was realised that the photon provides a clean probe for Nuclear Physics. Its interactions with the target can be treated perturbatively, with the fine structure constant as a small parameter.111We use the Heaviside-Lorentz system of electromagnetic units and , so . The factor in Eq. (1.2) is absent in the Gaussian system, but present for SI units; cf. [10]. The leading term in the scattering of radiation from a nucleus of mass and atomic number in the long-wavelength limit was first calculated by Rayleigh’s student, J. J. Thomson:
where is the scattering angle. Thirring, and independently Sachs and Austern, showed that Eq. (1.1) is not renormalised non-relativistically [5, 6]. It is also recovered as part of a low-energy theorem for a spin- target due to Low, Gell-Mann, and Goldberger [7, 8] which invokes only analyticity and gauge, Lorentz, parity and time-reversal invariance, and was generalised to arbitrary spin by Friar [9]. The theorem states that the nuclear magnetic moment is the only additional parameter needed to determine the leading spin dependence of the cross section.
1.1 The importance of dipole electric and magnetic polarisabilities
For a composite system, the first spin-independent piece of the Compton amplitude beyond the Thomson limit is parameterised by two structure constants: the electric and magnetic (scalar dipole) polarisabilities. Polarisabilities arise because the electric and magnetic fields of a real monochromatic photon with frequency displace the charged constituents of the system and thus induce charge and current multipoles, even if the target is overall charge neutral. The dominant contributions are typically an induced electric dipole moment , often generated by separating positive and negative charges along the dipole component of the electric field , and a magnetic dipole moment , often generated from currents induced by the dipole component of the magnetic field . In addition, aligning microscopic permanent electric and magnetic dipoles in the external fields can generate mesoscopic induced electric and magnetic dipoles. The linear response in frequency space demonstrates that the induced dipoles are proportional to the incident electric and magnetic fields, i.e.,
Neglecting recoil corrections, these induced dipoles then re-radiate at the same frequency and with the angular dependence characteristic of and radiation, respectively. The proportionality constants in Eq. (1.2) are the electric dipole polarisability and the magnetic dipole polarisability . They characterise the strength of the dipole radiation relative to the intensity of the incoming fields and vanish for objects with no internal structure. Since they lead to different angular dependences, they can be disentangled in the differential Compton scattering cross section at fixed frequency .
Polarisabilities encode the temporal response of the target to a real photon of energy and thus provide detailed information on the masses, charges, interactions, etc. of its active internal degrees of freedom. As an example, in the long-wavelength approximation, the Lorentz-Drude model assumes that the electric field of the photon displaces point-like, charged constituents of mass , bound in classical harmonic oscillators with resonance energies and damping factors :
One can estimate in the nucleon semiclassically by representing the constituents of its charged pion cloud as objects which are harmonically bound to the nucleon core with an eigenfrequency such that the pion cloud has the same root-mean-square radius as the nucleon, namely about . Eq. (1.3) then leads to a value of . This model is unrealistic, of course, but the estimate is surprisingly close to the experimental value. Throughout this review, we therefore quote values for and in the “canonical” units of , and for the dipole spin polarisabilities discussed below, in units of .
We also take the term “polarisabilities” to be the “Compton polarisabilities”, defined in parallel to the above intuitive description by a multipole expansion of the Compton amplitudes, as elaborated in Section 2.2. In nonrelativistic Quantum Mechanics, they can be related to the spectrum of nucleon excited states with energies , e.g.:
where is the electric dipole operator. (In Eq. (1.4) we have not explicitly written subtle but important corrections beyond non-relativistic second-order perturbation theory, which were emphasised by L’vov [11], and Bawin and Coon [12], and summarised by Schumacher [10].) Therefore, and are strongly influenced by the lowest state with the quantum numbers of an electric or magnetic dipole excitation of the nucleon. For , this is indeed the state. The excitation of the would appear to provide a sizable paramagnetic contribution to of order 10, but since the experimentally measured value is about an order of magnitude smaller, a diamagnetic contribution of similar magnitude but opposite sign exists whose precise nature is not yet determined.
Such excitations also set the energies at which the polarisabilities are manifest in the single-nucleon cross section: . In addition, Eq. (1.4) implies that the dynamical polarisabilities become complex once the first inelastic channel opens at the N threshold. Finally, polarisabilities are in general related to the dielectric function and magnetic permeability function of a macroscopic system. Since these, in turn, characterise optical properties, nucleon polarisabilities are related to the index of refraction and absorption coefficient of a bulk system of nucleons at a given frequency.
Though polarisabilities are naturally defined as functions of the photon energy , historically much of the emphasis in the context of the proton and neutron has been on the static polarisabilities and , which are often simply termed “the polarisabilities”. For clarity, we therefore refer to the functions as energy-dependent or dynamical polarisabilities [13, 14]222For completeness, we note that the generalised polarisabilities of virtual Compton scattering are explored by an incoming photon of non-zero virtuality and can provide complementary information about the spatial charge and current distribution, see e.g. [15] for a review.. The static polarisabilities are formally and uniquely defined via the Compton scattering amplitudes, as detailed in Section 2.1. Along with the anomalous magnetic moment, they parameterise the deviation of the proton Compton cross section from that of a point-like particle in a low-energy expansion which is valid up to photon energies of roughly 80 MeV. Static polarisabilities can be conceptualised, again up to subtle corrections [11, 12, 10], as the proportionality constants between induced dipoles and external fields which would be “measured” were the nucleon placed into a parallel-plate capacitor or a pure N-S magnet. They also enter in other processes with sensitivity to nucleon structure; in particular, the relation of to doubly-virtual forward Compton scattering has attracted recent interest in connection with the two-photon-exchange contribution to the Lamb shift in muonic hydrogen [16] (see also Ref. [17] and references therein), and with the nucleon electromagnetic mass shift, see most recently [18].
The Effective Field Theory (EFT) methods which are discussed in this review predict both static and dynamical polarisabilities. They can be used to extract the values of static polarisabilities from experimental data taken at energies too high for the low-energy expansion to be valid. We present a new EFT extraction of polarisabilities from world data in Sections 4 and 5.
1.2 Compton scattering from nucleons: data, structure and analysis tools
Therefore, in this review, we examine real Compton scattering from the simplest stable strongly-interacting systems, namely protons and light nuclei, in order to obtain information on the photon-nucleon scattering amplitude. Compton scattering from larger nuclei is reviewed in Ref. [19]. In Section 2.2, we provide the detailed relation of , and the spin polarisabilities to the single-nucleon Compton amplitude. The spin polarisabilities have received much recent attention in both theoretical and experimental studies, since they are a low-energy manifestation of the spin structure of the nucleon, parameterising its spin-dependent response to external electric and magnetic fields.
How are polarisabilities explored? Until the last decades of the 20th century, all experiments investigating Compton scattering from nucleons and nuclei employed bremsstrahlung beams. This created difficulties in accurately measuring the small (nb/sr) cross sections at energies where the polarisabilities are particularly relevant. The advent of photon tagging in the 1980s facilitated a clean separation of elastic and inelastic processes, enabling measurements of the proton cross section with good energy resolution at MeV. At the turn of the Millennium, this led to a wealth of data for the proton from Illinois [20], Saskatoon [21, 22] and MAMI [23, 24], and for the deuteron from Illinois [25], Saskatoon [26] and Lund [27]. Since most of these experiments were reviewed by Schumacher [10], we only summarise the data in Section 3, with particular attention to statistical and systematic errors.
In parallel with these developments, and were calculated in various theoretical models of nucleon structure [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38]. Comparing these predictions with Compton-scattering data indicates how accurately these models describe electromagnetic excitations of the nucleon. Quark models which do not incorporate explicit pionic degrees of freedom tend to underpredict and overpredict ; see e.g. Ref. [33]. Computations of and in chiral quark models incorporate both long-distance () and short-distance (other excitations) physics [30, 39, 40]. Direct determinations of nucleon polarisabilities from lattice simulations of the QCD path integral now appear imminent [41], with results reported in quenched [42, 43, 44], partially quenched [45, 46, 49, 50] and even full QCD [47, 48].
New plans for Compton-scattering experiments on protons and light nuclei at MAMI, the HIS facility at TUNL and MAX-Lab in Lund make this an opportune time to re-examine our knowledge of nucleon Compton scattering at energies up to a few hundred MeV. We therefore delineate in this review what is known about the Compton amplitudes of the proton and neutron. Equation (1.4) implies that the polarisabilities are dominated by the lowest nucleonic states, namely by N and dynamics, while sensitivity to higher excitations is suppressed. This means that Compton scattering at low energies, MeV, is dominated by long-distance properties of the nucleon. In particular, we note that the particles detected in the experiments are photons, nucleons and pions, not quarks themselves. Analysing Compton scattering at these energies in terms of quark degrees of freedom, such as in lattice QCD or models of nucleon structure, is thus not really profitable. Instead, such calculations can be tested against the constraints extracted from data using a theoretical approach that includes the pertinent low-energy dynamics, provided that it is sufficiently general to encompass the data without undue prejudice. Effective Field Theory (EFT) fits these requirements.
1.3 The role of Effective Field Theory
The basic principle of an EFT is that many phenomena can be economically—i.e. effectively—described in terms of entities which are not elementary. The fact that the details of nucleon structure are not probed at low energies suggests that a low-energy EFT which includes nucleons, pions and photons should be a useful tool to extract information on nucleon polarisabilities.
In general, for an EFT approach to be successful, a separation of scales must exist between the energies involved and those required to excite the particular degrees of freedom of the system which are not treated dynamically. A famous example of an EFT is the Fermi theory of weak interactions, in which decay is described by a contact interaction between the neutron and its decay products , and p. At energy scales of the order of a few MeV, the threshold for production of W and Z bosons is far off, and they can be “integrated out” to leave a simple energy-independent four-fermion interaction together with a series of further interactions, each of which is suppressed by powers of the small quantity , where is the typical momentum of a decay product. Similarly, at low enough energies—energies below those where pion degrees of freedom become relevant—a theory of heavy, point-like nucleons should suffice to describe the interactions of nucleons with one another and with photons. This theory has come to be known as the “Pionless Effective Field Theory” (EFT()) of Nuclear Physics, and for NN scattering, it is equivalent to Bethe’s low-energy Effective Range Expansion [51, 52, 53, 54, 55, 56, 57, 58, 59]. As in all EFTs, “point-like” and “pionless” does not imply the absence of effects such as a non-zero anomalous magnetic moment which are due to nucleon structure, pions and heavier particles. Instead, they are taken into account through Lagrangian parameters—so-called low-energy constants (LECs)—but are not explained within the EFT itself.
The guiding principle in constructing an EFT Lagrangian is that all terms compatible with the symmetries of the underlying theory must be included, each proportional to an a priori unknown LEC. This infinite string of terms and couplings is organised according to the power of that each operator contributes to amplitudes, with the typical momentum of the process, and the breakdown scale set by the mass of the lightest omitted degree of freedom (, for example, in the case of EFT()). Since all terms are included, the counterterms needed to renormalise the divergent loops at a given order in are automatically present. Unless the theory has a low-energy bound state, the renormalised loop contributions are suppressed, typically by for each loop. Therefore, only a finite number of terms in the Lagrangian need to be considered when working to a particular order in the small, dimensionless parameter . While the theory is thus not renormalisable in the conventional sense, only a finite and usually small number of terms are needed for renormalisability to a given order in . In addition, a rigorous assessment of residual theoretical uncertainties can be made for any process by estimating the accuracy of its momentum expansion.
Though the number of possible terms (and hence the number of LECs) in the Lagrangian grows rapidly with the order, any given process usually involves only a few of them. Once determined by one piece of data, an LEC enters in the prediction of other observables. Some LECs are related to familiar properties of the particles involved, like charge, mass, anomalous magnetic moment, decay constant, etc., but others are less easy to fix and interpret. Some LECs which govern pion interactions are now being computed by direct lattice simulations of QCD [60, 61, 62]. But even when LECs can be derived from the underlying theory, calculations of more complex processes are often more tractable in the EFT framework, as we shall see here for Compton scattering from protons and light nuclei.
Compton scattering in EFT() is discussed in Section 5.4.1. For the proton, it is of limited use, as it amounts to an expansion of the amplitude in powers of . Since only a small fraction of the data is in the regime where this expansion is valid, the resulting errors on the polarisabilities are inevitably large. In the two-nucleon sector, though, the situation is different. The low-energy NN scattering amplitude is given as an expansion in powers of momenta, with the LECs determined by the scattering length, effective range, etc. Nuclear binding effects and photon-nuclear interactions are then fixed and the leading deuteron Compton amplitudes are predicted with no free parameters, as demonstrated in 1935 in a calculation by Bethe and Peierls [4].
The EFT() expansion breaks down for typical momenta of order , and above this point, the pion itself must be included. While an EFT including the pion must exist because the next excitation, the , is far enough away to provide a separation of scales, there is no guarantee that it is viable. The size of the pion-nucleon coupling constant, , suggests that multi-loop diagrams including dynamical pions may not be suppressed. However, two crucial aspects make the theory manageable. First, the fact that the pion is much lighter than other hadrons is now understood as a consequence of the spontaneously broken (hidden) chiral symmetry of QCD. The up and down quarks are nearly massless on the scale of typical QCD energies. If they were actually massless, the Lagrangian would be invariant under independent isospin rotations of the left- and right-handed quarks, . However, only the vector (isospin) subgroup, , is manifest in the hadron spectrum and the full symmetry is hidden in the physical QCD vacuum. The pions are then identified as the three Nambu-Goldstone bosons corresponding to the three axial rotations which are symmetry operations on the QCD Lagrangian, but not on the QCD vacuum. The small non-zero quark masses lead to pion masses which are again much smaller than typical QCD scales, . The fact that the pion is a (pseudo-)Nambu-Goldstone boson leads to the second key point: in the “chiral” or zero-quark-mass limit, soft pions decouple so that their interactions with one another and with other hadrons vanish linearly with their momentum.
This provides a perturbative expansion of amplitudes in powers of , with the light EFT scales being the pion momenta and mass, and being the scale associated with hadrons which are not explicitly included in the EFT, . This EFT is known as “chiral EFT” (EFT). The version without explicit degrees of freedom is often referred to as “Baryon Chiral Perturbation Theory” (BPT), and the purely pionic one as Chiral Perturbation Theory (PT). Details pertaining to Compton scattering are discussed in Section 4.2, including the special role of the . There we also show that photons are included in EFT partly by invoking minimal substitution and partly by including the field strength tensor as a building block in the Lagrangian. Since the latter generates photon couplings which are not constrained by gauge invariance, new LECs enter, including ones which can (at successively higher orders) be related to the anomalous magnetic moment and the charge radius of the nucleon, and to non-chiral contributions to polarisabilities.
The resulting EFT framework provides predictions for the low-energy interaction of pions, photons and nucleons through calculations consistent with the known pattern of QCD symmetries and their breaking. Furthermore, because it is a quantum field theory, EFT incorporates the requisite consequences of unitarity and Lorentz invariance at low energies. It is this framework which we use to analyse proton Compton scattering in Sections 4.2 and 4.4. In Sections 4.1, 4.3 and 4.5, we also compare EFT to other approaches for analysing low-energy Compton scattering, most notably dispersion relations (DRs). (Full details of DRs are given in the review of Drechsel et al. [15].) The main differences between EFT and DRs lie in the careful incorporation of chiral constraints in the former, its stringent agnosticism regarding high-energy details, and the fact that the presence of a small parameter allows an a priori estimate of residual theoretical uncertainties.
1.4 The elusive neutron: Compton scattering from light nuclei
Experiments on the proton reveal only half of the information in Compton scattering on the nucleon. The 14.7 minute lifetime of the neutron, coupled with the relative weakness of its electromagnetic interactions, means that the neutron Compton amplitude must be inferred indirectly. While some results exist for scattering neutrons directly in the Coulomb field of heavy nuclei, more accurate data are available for Compton scattering in few-nucleon systems, where nuclear effects can be precisely calculated and taken into account in the analysis. However, it is clearly advantageous to use the same theoretical framework for both the nuclear and photon-nucleon dynamics. The EFT low-momentum expansion again provides a controlled, model-independent framework for subtracting the nuclear binding effects and analysing the available elastic deuteron scattering data. Both the number and quality of these data, reviewed in Section 3.4, are appreciably inferior to the proton case, since the experiments are markedly harder. The substantial progress made to determine the neutron polarisabilities via this route is reviewed in Section 5. The key to extracting neutron polarisabilities from few-nucleon targets is that the coherent nature of deuteron Compton scattering allows us to observe the photon-neutron amplitude through its interference with the proton and meson-exchange amplitudes, while the Thomson limit imposes a stringent constraint on the few-nucleon amplitude that can be used to check these non-trivial calculations.
EFT work on the deuteron originates from Weinberg’s seminal papers [63], where the nucleon-nucleon potential is computed up to a fixed order in the momentum expansion and then iterated using the Schrödinger equation to obtain the scattering amplitude. The resulting wave functions are combined with operators for Compton scattering derived in EFT. The photon-nucleon operators are given by the EFT photon-nucleon amplitudes described above. However, EFT also expands the nuclear current operators in powers of . This has the crucial advantage that the chiral dynamics which drives low-energy Compton scattering is treated consistently in both the single- and few-nucleon operators, so that EFT results for deuteron Compton scattering can be assessed for order-by-order convergence, too. Meanwhile, Compton scattering from the deuteron in quasi-free kinematics for the neutron was measured in Refs. [64, 65, 66, 67, 68], and values for were extracted. These experiments are discussed in Section 3.5, but EFT has not yet been extended to . Model calculations of elastic and inelastic scattering on the deuteron are briefly reviewed in Section 5.5. EFT is also used to produce the first calculations, in any framework, of elastic scattering on He [69, 70, 71]; see Section 5.6. Since He is doubly charged, its cross section is larger by a factor of up to compared to the proton or deuteron. In addition, polarised He is interesting since it serves as an effective polarised neutron target.
1.5 Results and the future
In Sections 4.4 and 5.3, we apply the EFT methodology to the proton and deuteron databases, respectively, and present new central values and uncertainties for the proton and neutron dipole polarisabilities, as well as a detailed comparison of the EFT predictions with data. Here we preview our best values of the proton and isoscalar (i.e. average nucleon) scalar dipole polarisabilities in a two-parameter fit:
In a one-parameter fit employing the Baldin sum rules for the proton and isoscalar nucleon:
In Section 6, we close this review with a discussion of future experiments which hold the promise of improving the Compton database and the determination of polarisabilities. We also describe upcoming experimental efforts to extract the still relatively unexplored spin polarisabilities. Finally, we outline the anticipated EFT developments that will help refine the analysis of these forthcoming data.
2 Foundations
2.1 Overview
We begin by parameterising the -matrix for Compton scattering of a photon of incoming energy from a nucleon with spin by six independent invariant amplitudes [72], which read in the operator basis first appearing in Ref. [73] and usually employed in EFT:
where () is the unit vector in the momentum direction of the incoming (outgoing) photon with polarisation (), is the scattering angle, and . This form holds in the Breit and centre-of-mass (cm) frames. The first two amplitudes are spin-independent, while the other four parameterise interactions with the nucleon spin. The amplitude is related to the differential cross section by
where is a frame-dependent flux factor which tends to at low energies; see also Section 2.3.
For forward scattering, , and only the structures of and survive. The behaviour of these amplitudes as is determined by low-energy theorems (LETs) which rely only on analyticity and gauge, Lorentz, parity and time-reversal invariance [7, 8]
where is the nucleon charge, is the anomalous magnetic moment (in nuclear magnetons) and is the mass of the target. At , we recover the spin- and energy-independent Thomson term, and hence the corresponding cross section of Eq. (1.1).
Eq. (2.3) may also be obtained in the limit from the pole diagrams in a calculation of Compton scattering using photons coupled to a Dirac nucleon via its charge and anomalous magnetic moment (with the spinors normalised to ), as shown in diagram (a) of Fig. 2.1.
(a) Nucleon Born graphs for Dirac nucleons; (b) pion Born graph;
(c) structure contribution.
Figure 2.1: (a) Nucleon Born graphs for Dirac nucleons; (b) pion Born graph; (c) structure contribution.
As explained in the Introduction, our primary interest is to obtain information on the structure of the two-photon response of the nucleon, which requires going beyond the single-photon processes of the Born terms. It is therefore useful to separate the amplitudes into “non-structure” and “structure” parts
Identifying the former with the pole (Born) contribution mentioned above, while not a unique choice, ensures that the LETs are satisfied by these terms alone. It also agrees with usage in dispersion relation (DR) calculations (see Section 4.1). There is another known contribution to spin-dependent scattering which comes from the -channel exchange of a neutral pion, as shown in Fig. 2.1(b), which first contributes at ; we choose to include this term in the Born or “non-structure” part as well, but the choice is not universal. The other term, , represented by Fig. 2.1(c), parameterises deviations from the “known” part of the amplitude and describes the nucleon as a polarisable object.
Expanding both nucleon Born and structure amplitudes in the Breit frame to order , we have
where the pole amplitudes are, with and the third Pauli matrix in isospin space:
To this order, the only difference in the cm frame is that has an additional Born term . The omitted terms start one order lower in the cm frame (e.g. in ), because the amplitude does not have manifest crossing symmetry, whereas in the Breit frame it does. The “structure” parts of the amplitude are represented at this order by and , namely the static spin-independent polarisabilities discussed above, and by the analogous spin polarisabilities [74]; see Section 2.2. The canonical units are for the former and for the latter. We denote proton and neutron polarisabilities by a superscript (e.g. ) and define isoscalar and isovector polarisabilities as averages and differences, such that and .
For forward scattering, only and augment the LETs of Eq. (2.3), while the pole terms do not contribute. Since forward scattering amplitudes can be related to inelastic cross sections via the optical theorem, two sum rules can be constructed for these quantities:
where is the total cross section for an unpolarised target and () is the cross section for a photon-nucleon system of total helicity 1/2 (3/2). The integrals are over lab energies, starting at the pion photoproduction threshold. The first of these is known as the Baldin Sum Rule [75, 76]. We adopt the evaluation of Olmos de Leónet al. [24] for the proton and that of Levchuk and L’vov [77] for the neutron:
which combine to give the isoscalar sum-rule value, with errors added in quadrature:
Using information on pion photoproduction multipoles, together with a parameterisation of data at intermediate energies and a Regge form at higher energies, Babusci et al. [78] obtained . However, using different parameterisations for , Levchuk and L’vov [77] obtained a central value of 14.0. The evaluation quoted above is more recent and more conservative [24]. Given the uncertainties associated with extracting neutron multipoles from deuterium data, perhaps it is not surprising that evaluations for the neutron are more variable. We regard the direct use of deuterium photodisintegration data above pion threshold as ill-advised (c.f. Ref. [78]). The value we quote uses neutron photoproduction multipoles obtained from proton ones via isospin considerations in a more reliable approach, although significant model dependence is still present [77].
Only the combinations and enter for backward scattering. Because we have separated out the pion-pole contribution, our values for differ from those obtained when it is included. The difference is , using the proton value [79].
Various low-energy cross sections have been constructed in the literature. All of these can be found by using various approximations to Eq. (2.5) in Eqs. (2.1) and (2.2). The simplest is the Klein-Nishina formula of a point-like Dirac particle, with [80]. If is included but the target is still structureless and terms of order are discarded, one obtains the Powell cross section [81]. The Petrun’kin cross section additionally allows the inclusion of terms of order arising from the interference of the leading structure contributions and with the Thomson term in and hence is complete at [82, 83]. None of these includes spin polarisabilities, the pole, or the energy dependence of the polarisabilities. We shall say more about their impact in the next section.
2.2 Polarisabilities from a multipole expansion
Many multipoles are induced inside an object that interacts with an electromagnetic field of frequency . Each oscillates with that same frequency and thus emits radiation with a characteristic angular distribution. The proportionality constant between each photon field and the corresponding induced multipole moment is called a polarisability; each is an energy-dependent function which parameterises the stiffness of the internal degrees of freedom with particular quantum numbers against deformations of a given electric or magnetic multipolarity and energy. In this section, we generalise the picture presented in the Introduction to consider polarisabilities beyond the scalar dipole ones.
Hildebrandt et al. [13, 14] used the formalism of an energy-dependent multipole analysis established by Ritus [84, 85, 86] and summarised in Ref. [87] to define energy-dependent polarisabilities. Here we proceed differently, constructing the first few multipoles via the most general field-theoretical Lagrangian which describes the interactions between a nucleon field with spin and two photons of fixed, non-zero energy and definite multipolarities. This includes the structure effects, i.e. the local coupling of the two photons to the nucleon. Taking into account gauge and Lorentz invariance, as well as invariance under parity and time-reversal, the interactions with the lowest photon multipolarities are
with , . These terms are straightforward extensions to the effective Lagrangian of zero-energy scattering in Refs. [88, 89]. The photons couple electrically or magnetically () and undergo transitions of definite multipolarities and . The interactions are unique up to field redefinitions using the equations of motion. Dipole couplings are proportional to the electric and magnetic field directly, or to their time derivatives. Quadrupole interactions couple to the irreducible second-rank tensors and . Eq. (2.2) lists all contributions with coupling to at least one dipole field. Polarisabilities of higher multipolarity, e.g. the electric and magnetic quadrupole polarisabilities [88, 89], are denoted by ellipses. Thus far, such terms are not relevant for Compton scattering below [14, 90, 91, 92, 93].
The two-photon response of the nucleon in the dipole approximation is therefore characterised by the six linearly independent, energy-dependent polarisabilities of Eq. (2.2). The spin-independent terms are parameterised by the two scalar functions already encountered in Section 1.1: the electric dipole polarisability , and the magnetic dipole polarisability . The four spin polarisabilities parameterise the response of the nucleon spin to an external field. The two corresponding to dipole-dipole transitions, and , are analogous to the classical Faraday effect related to birefringence inside the nucleon [94]. They describe how an incoming photon causes a dipole deformation in the nucleon spin, which in turn leads to dipole radiation. The two mixed spin polarisabilities, and , encode scattering where the angular momenta of the incident and outgoing photons differ by one unit. In principle, the polarisabilities can be defined in any coordinate system in which the initial and final photon energies are identical. In practice, the centre-of-mass frame is usually used.333The dynamical polarisabilities introduced here via Eq. (2.2) differ from those given in Refs. [13, 14] by a factor of . The polarisabilities are linear combinations of the multipole moments of the Compton amplitudes; the details, including the relevant projection formulae, are given in Ref. [14].
We can now translate the interactions (2.2) into contributions to the structure amplitudes :
where the dots refer to omitted higher multipoles. The relations between the static polarisabilities of the multipole expansion and those of Ragusa [74] defined in Eq. (2.5) are
The first of the photon multipolarities in the subscripts of the spin polarisabilities is sometimes dropped in the literature (so etc).
We reiterate that polarisabilities are identified by a multipole analysis at fixed energy, i.e. only by the angular and spin dependence of the amplitudes. Eq. (2.11) emphasises that the complete set of energy-dependent polarisabilities does not contain more or less information than the untruncated Compton amplitudes . However, the information is more accessible, since any hadronic mechanism and interaction leaves a characteristic signature in a particular multipole polarisability, as discussed in the Introduction. Thus, when the multipole expansion is truncated after the dipole terms, determining the six energy-dependent dipole polarisabilities is reduced, in principle, to an energy-dependent multipole analysis of the Compton scattering database. Such proof-of-principle results were reported in [95, 96, 92] but suffer at present from rather large error bars.
At very low energies, the functions encoding the dynamical polarisabilities can be approximated by their zero-energy values and therefore some experiments (most recently Ref. [20]) have used the Petrun’kin formula to extract the static polarisabilities and directly from their data. Indeed, the low-energy expansion of the cross section could conceivably be extended to higher powers in . At fourth order, not only do the spin polarisabilities enter, but also the next terms (slope parameters) in the expansion of and :
Static values of the next multipoles, the scalar quadrupole polarisabilities, also enter at . (Contributions to at and were discussed by Babusci et al. [88] and Holstein et al. [89].) However, the convergence is governed by the pion-production threshold (the first non-analyticity) and the expansion is thus in powers of . With the slope correction to the scalar polarisabilities of size , this leads to a correction of about for photon energies as low as . The expansion is therefore useless where most high-accuracy data are taken.
Consequently, modern extractions of the static polarisabilities choose a different route. It is assumed that the energy dependence of the polarisabilities, and hence that of the amplitudes , is adequately captured in some framework (e.g. dispersion relations, EFT). With the long-range part of the interactions thus fixed, one fits up to six low-energy constants which encode the short-distance dynamics and thereby determines the static polarisabilities from data. This is the approach taken in this review.
2.3 Kinematics
Consider Compton scattering on a nucleus with mass , , with and ( and ) the kinetic energy and momentum of the target before (after) the reaction. So far, denoted a generic photon momentum and a generic scattering angle. We now add subscripts to differentiate between the centre-of-mass (cm), laboratory (lab) and Breit frames, using relativistic kinematics throughout. The coordinate axes are specified as follows: the incident photon beam direction defines the -axis, and the -plane is the scattering plane, with the -axis perpendicular to it.
In the centre-of-mass (cm) frame, the total energy is the square-root of the Mandelstam variable ,
On the other hand, experiments are performed in the lab frame, where the incident and outgoing photon energy are related to one another by a recoil correction, and to , as follows:
The scattering angle transforms as
where is the relative velocity between the cm and lab frames. The frame-dependent flux (phase-space) factor for cross sections (see Eq. (2.2)) is
It is also useful to introduce the Mandelstam variables and :
where we have also listed variables in the Breit frame, with
For , i.e. small photon energy or large target mass, the three coordinate frames coincide. The Breit and lab frames coincide for , and the Breit and cm frames for .
In the Breit or “brick-wall” frame, the photon transfers no energy and the target recoils with the magnitude of its momentum unchanged but its direction exactly reversed, . This has the advantage that the Compton amplitude is manifestly crossing-symmetric, i.e. invariant under the interchange of initial and final states: , . By inspection of Eq. (2.1), the spin-independent amplitudes are even in , and the spin-dependent ones, , are odd.
2.4 Observables
We can now relate the six independent amplitudes of Eq. (2.1) to scattering observables. A complete classification of unpolarised, single- and double-polarisation observables for a spin- target (nucleon or He) by Babusci et al. [88] demonstrated that experiments in which up to two polarisations are fixed can be described by four independent observables below the first threshold and four more observables above it. Here, we list some of the combinations which either have been or will soon be explored experimentally or theoretically: unpolarised, linearly or circularly polarised beams and unpolarised or polarised targets, without the detection of final-state polarisations. We define them as they are measured, and—with the exception of the differential cross section—refer to the literature for formulae which relate them to the Compton amplitudes . Observables with a vector-polarised nucleus can be understood by replacing the spin-polarised nucleon by the polarised nucleus. These will be discussed in Section 6.1.
In order to take into account nuclear binding for the deuteron and He, it is convenient to use a helicity basis for these targets:
where is the circular polarisation of the initial/final photon, and is the magnetic quantum number of the initial/final target spin, i.e. for the deuteron and for He. A target of spin has (in and out state) amplitudes, but parity and time reversal leave only independent components: spin- helicity amplitudes become independent ones, ; helicity amplitudes for spin- reduce to the of Eq. (2.1); and amplitudes for spin- yield independent structures, constructed e.g. by Chen et al. [97].
Unpolarised beam and target:
In this case, the only observable is the differential cross section. For the nucleon, it is obtained from Eq. (2.2) after averaging over the initial target spins and photon polarisations and summing over the final states; one finds in the basis of Eq. (2.1) (see e.g. Refs. [98, Chapter IV.2] and [92, Chapter 4.1]):
In the helicity basis, the unpolarised differential cross section is built with the flux factors of Eq. (2.17) by summing over all combinations of incident and outgoing quantum numbers and including a symmetry factor from averaging over the initial target and photon polarisations:
Polarised beam, unpolarised target:
The cross section with a circularly polarised photon beam of arbitrary helicity is half of the unpolarised one and thus provides no additional information. With a linearly polarised beam, two observables can be constructed, see Fig. 2.2, with corresponding experiments approved at HI[99, 100].
(Colour online) Observables for linearly polarised photon
incident on unpolarised target.
Figure 2.2: (Colour online) Observables for linearly polarised photon incident on unpolarised target.
The cross section for an incoming photon is denoted by for an incoming photon polarised parallel (perpendicular) to the scattering plane. Their sum gives the unpolarised cross section, and the difference,
is the beam asymmetry444The subscript is omitted in Ref. [101]; the symbol used in Ref. [93] is .. Its relation to the deuteron helicity basis is given in Refs. [71, 101, 93].
For the forward and backward spin polarisabilities, and , we already saw that the multipole expansion is a convenient tool to identify configurations in which a specific polarisability is isolated or suppressed. Such configurations were found for the proton by Maximon [102]. The interaction in Eq. (2.2) parameterised by vanishes when the polarisations of the incoming and outgoing photons are orthogonal, e.g. when a photon which is linearly polarised in the scattering plane scatters at as in , see Fig. 2.3.
(Colour online) Left: Configuration for
which an induced electric dipole cannot radiate an (Colour online) Left: Configuration for
which an induced electric dipole cannot radiate an
Figure 2.3: (Colour online) Left: Configuration for which an induced electric dipole cannot radiate an photon to an observer (“eye”) at ; right: same for radiating an photon.
In that case, since the incoming magnetic field is orthogonal to the scattering plane, the induced magnetic dipole radiates most strongly at , providing maximal sensitivity to . Similarly, is independent of but maximally sensitive to . When the nucleon is embedded in a nucleus, the relative motion of the N system may complicate this analysis.
Polarised beam, polarised target:
Figure 2.4 depicts double-polarisation observables with circularly polarised photons, as an example of observables that will be explored in the future; see Section 6.2.
(Colour online) Observables for circularly polarised photon
incident on polarised target.
Figure 2.4: (Colour online) Observables for circularly polarised photon incident on polarised target.
The target can be polarised in the scattering plane along or along the beam direction, . Cross-section differences can be defined by flipping the target polarisation:
The first arrow of the subscript denotes a positive beam helicity, the second the target polarisation. compares a target polarised along vs. , i.e. perpendicular to the beam direction but in the scattering plane. Similarly, is the difference with the target polarised parallel vs. anti-parallel to the beam helicity555In Ref. [97], is denoted by .. Both observables change sign for left-circularly polarised photons (negative beam helicity).
Polarisation asymmetries are defined as the ratio of the cross-section differences to their sums666 is denoted by in Ref. [88] and in Refs. [101, 97].:
Except for for a spin- target, the denominators are not the unpolarised cross sections [71, 101, 92]. Normalising to sums of cross sections removes many experimental systematic uncertainties and the frame dependence associated with the flux factors . However, a small spin-averaged cross section in the denominator may enhance theoretical uncertainties or hide unfeasibly small count rates: cross-section differences set the scale for the beamtime necessary to perform these experiments. The asymmetries vanish in the static limit only for a proton target, but are nonzero for the neutron.
The relations of the double-polarisation observables (with arbitrary angle between polarisation and scattering planes) to the amplitudes are compiled in Ref. [92, Chapter 4.1] for complex amplitudes. For real amplitudes, i.e. below threshold, the relations for and were first reported in Ref. [98, Chapter IV.2]. Those for the helicity amplitudes are compiled for the deuteron in [101, 93] and for He in [70, 71]. Similarly, double-polarisation observables with linearly polarised photons can also be defined— see e.g. [88, 92, 93] for definitions and figures analogous to the ones above.
3 Experimental overview
In this section, we review the experimental efforts on Compton scattering using proton and deuteron targets, spanning the past half-century. For the proton, we have divided the discussion into low-energy measurements (below pion threshold) and high-energy measurements (above pion threshold). There is a relatively clear distinction between experiments in these two energy regions, although some cases do overlap with both regions. There are also some polarised measurements on the proton in the modern era. For the deuteron, there are again two categories—elastic Compton-scattering experiments and quasi-free measurements in which deuteron breakup is exploited to study the neutron explicitly in quasi-free kinematics (with the proton acting as a spectator).
Special emphasis will be placed on enumerating the statistical and systematic uncertainties of the proton and deuteron experiments, since these issues figure prominently in the fitting of the various data sets using the EFT formalism. Somewhat cursory details are given about the experiments: many are discussed at greater length in the review by Schumacher [10].
3.1 Low-energy proton Compton scattering
The earliest low-energy Compton-scattering experiments on the proton (up to about MeV) were reported in the mid-to-late 1950s by Pugh et al. [103], Oxley [104], Hyman et al. [105], Bernardini et al. [106] and Goldansky et al. [107]. These early experiments were not aimed at measuring the electromagnetic polarisabilities of the proton, as we think of them today. In fact, these experiments were more motivated by their ability to test recently developed dispersion-theory calculations of Gell-Mann and Goldberger and others. Nevertheless, it is noteworthy that in some of these early papers (most notably Goldansky et al. [107]), attempts were made to extract the proton polarisability.
These early experiments were pioneering efforts, given the difficulty of working with continuous bremsstrahlung photon beams and detector systems with very poor energy resolution. Large NaI photon detectors with good energy resolution were not yet available, and photon-tagging facilities were still decades in the future. Normalising the photon flux to obtain an absolute cross section is notoriously difficult with bremsstrahlung beams, not to mention that the incident continuous bremsstrahlung beam itself has no well-defined energy resolution. For example, the experiment of Oxley [104] had a central photon energy of 60 MeV, with a full width of 55 MeV—so it is remarkable that this experiment produced results for the cross section between 10.6 and 14.7 nb/sr from 70 to 150, which is generally consistent with modern results.
Directed efforts were made in the experiment of Goldansky et al. [107] in 1960 to determine the proton polarisability. The same group continued these efforts many years later in the mid-1970s in the experiment of Baranov et al. [108, 109]. While uncertainties in the extracted polarisabilities were reduced by a factor of two in the later experiment, the experimental techniques during that period were still relatively crude compared to today. For this reason, it is reasonable to consider the data prior to 1980 to be exploratory in nature, and to not give very much credence to the absolute scale of the cross sections from those experiments.
Almost 20 years passed before the next proton Compton-scattering experiment was attempted. Two major developments in experimental techniques emerged in that period to make the new generation of Compton measurements considerably more reliable than their predecessors. First, the method of photon tagging was introduced, providing both a mono-energetic beam of photons and a means of normalising the photon flux by direct counting of the post-bremsstrahlung electrons. This revolutionised many photonuclear experiments. Second, new large-volume high-resolution NaI detectors (25.4 cm diameter 25.4 cm long, and even larger) were available with a resolution for photons of 50–100 MeV. These two improvements in the beam and the detectors changed the game significantly, paving the way for a new era of Compton-scattering experiments designed specifically to pin down the proton polarisability.
The major experiments that have contributed in the modern era to the determination of the proton polarisability are those of Federspiel et al. [20], Zieger et al. [23], Hallin et al. [21], MacGibbon et al. [22], and Olmos de León et al. [24] covering the 10-year period between 1991 and 2001. All but two of them used tagged-photon beams of energy 165 MeV—the exceptions are Hallin, who used a continuous bremsstrahlung beam with energies up to 289 MeV, and Zieger, who used a bremsstrahlung beam with a proton-recoil detection technique. It is worth examining all of these low-energy experiments in more detail to elucidate their relative merits and potential weaknesses.
The first of the modern experiments was conducted by Federspiel et al. [20] at the tagged-photon facility at Illinois. Tagged photons in the energy range |
51a80648820e31a9 | Philosophy of dynamic development
Encyclopaedia of relations and characters,
their evolution and history
Part I: Natural laws
Marinus Dirk Stafleu
©2018, 2019 M.D.Stafleu
1. Contours of a Christian philosophy of dynamic development
2. Sets
3. Symmetry
4. Periodic motion
5. Physical characters
6. Organic characters
7. Inventory of behaviour characters
Encyclopaedia of relations and characters,
their evolution and history
Chapter 1
Contours of a Christian philosophy
of dynamic development
1.1.The idea of law
1.2. Relations
1.3. Characters and character types
1.4. Interlacement of characters
1.5. Natural evolution and cultural history
Encyclopaedia of relations and characters. 1. Contours of a Christian philosophy of dynamic development
1.1. The idea of law
The introductory chapter 1 of the Encyclopaedia of relations and characters sketches some contours of a Christian philosophy of dynamic development, intended to be a twenty-first-century update of Herman Dooyeweerd’s and Dirk Vollenhoven’s philosophy of the cosmonomic idea.[1] Its religious starting point is the confession that God created and sustains the world according to natural laws and normative principles. Besides the idea of law (1.1), the ideas of relations (1.2) and of characters (1.3) and their interlacements (1.4) will be systematically investigated.
This encyclopaedia is not alphabetically ordered, but follows the successive modal aspects as developed by Vollenhoven and Dooyeweerd and amended by the present author.
The idea of law (originally ‘wetsidee’ in Dutch, translated by Dooyeweerd in 1953 as ‘cosmonomic idea’) is the realist religious view confessing that God created the world developing according to laws and values which are invariable because He sustains them. Christians know God through Jesus Christ, who submitted himself to the Torah, the Law of God. The idea of natural law as used in the physical sciences since the seventeenth century confirms this idea of law. Natural laws are not a priori given, but partial knowledge thereof can be achieved by studying the law conformity of the creation, which, in contrast to the eternal God, is in every respect temporal, in a perennial state of dynamic development, ensuring an open future.
The modern idea of law arose together with the rise of modern science.[2] The idea that invariant laws govern nature is relatively new. The rise of science in the seventeenth century implied the end of Aristotelian philosophy, having dominated the European universities since the thirteenth century. According to Aristotle, four causes, form, matter, potentiality, and actuality determine the essence of a thing and the way it changes naturally. Each thing, plant, or animal has the potential to realise its destiny, if not prohibited by the circumstances. The aim of medieval science was to establish the essence or nature of things, plants and animals, their position in the cosmic order, and their use for humanity.
Although essentialism is still influential, since the seventeenth century it became replaced by the search for laws. The medieval distinction of positive law, given by people, from (mostly moral) natural law, ordained by God and revealed in the Bible, was hardly ever applied in science. In a scientific context, the word law was introduced about 1600.
Johann Kepler was the first to formulate laws as generalizations in the form of a mathematical relation. Apparently, Kepler’s first law (planets move in elliptical paths with the sun at their focus) does not differ very much from the view, accepted since Plato, that the orbits of the celestial bodies are circular, albeit with the earth at their centre. After all, both circles and ellipses are geometrical figures. But Plato put circular motion forward as being the essential form of celestial motion, not as a generalization from observations and calculations. From Hipparchus of Nicaea and Claudius Ptolemy up to Nicholas Copernicus, astronomers have tried to reconcile the observed motions with a combination of circular orbits. In his elaborate analysis of Tycho Brahe’s observations of the planet Mars during two decennia, Kepler found its orbit to be an ellipse, with the sun in a focus rather than at the centre. He assumed this could solve many problems for the other planets, too. Plato’s circular uniform motion was a rational hypothesis, imposed on the analysis of the observed facts. Kepler’s elliptical motion was a rational generalization of fairly accurate observations, a mathematical formulation of a newly discovered natural law.
Since antiquity, astronomers knew very well that planets as seen from the earth have variable speeds. They applied various tricks to adapt this observed fact to the Platonic idea of uniform circular motion. Kepler accepted changing velocities as a fact, and connected these to the planet’s varying distance to the sun as expressed in its elliptical path. He established a constant relation, his second law: as seen from the sun, a planet sweeps equal areas in equal times.
The introduction of the area law is the first instance of a method to become very successful in natural science. It related change to a constant, a magnitude that does not change. It means formulating several conservation laws, of energy, linear and angular momentum, of electric charge, etc. These laws impose restraints on any changes to occur.
During the seventeenth and eighteenth century natural laws were considered instruments of God’s government. This could be interpreted either in a rationalistic sense, when natural laws were considered both necessary and irrefutable, based on a priori principles (as with René Descartes or Immanuel Kant); or in a voluntaristic way, such that the world is as God willed it, but God could have made the world differently; or in an empiricist way: the laws are not irrefutable but can become known from empirical research (as with Isaac Newton, Robert Boyle, and John Locke).
Most classical physicists were faithful Christians, and many adhered to some variety of natural theology, assuming that God ordained the natural laws at the creation. At the end of the nineteenth century scientists started to take distance from this view, either because they became atheists, or because they asserted it to be theological or metaphysical, beyond the reach of physics. Therefore they avoided the metaphor of law, gradually replacing it by another expression of regularity. They never ceased to study regular patterns in nature. The word law remained in use mainly for the results of classical physics, in particular if expressed in a mathematical formula.
Realist scientists usually respond positively to the question of whether natural laws have an existence independent of humankind. Aimed at finding regularities, the empirical method is firmly rooted in the prevalent scientific worldview. Laws discovered in the laboratory are declared universal, holding for the whole universe at all times. Otherwise, theories of astrophysical or biological evolution cannot be taken seriously. With the purpose of studying the law-conformity of reality, science takes the existence of laws as a point of departure not to be proved. Natural laws are not invented but discovered.
In contrast, rationalist, positivist, as well as post-modern philosophers assert that natural laws are invented by scientists. Rationalists like René Descartes and Immanuel Kant assumed natural laws to be necessary products of human thought. Positivists like Ernst Mach considered natural laws to be logical-economic constructs, intended to create some order in the otherwise chaotic reality consisting entirely of observable phenomena. And post-modern philosophers hold that natural laws are social constructs, agreed upon by interested groups of scientists. These sceptics can neither explain the coherence of the natural sciences nor the successful application of natural laws in technology. Their opinions effectively maintain that scientists are free and autonomous law-givers even with respect to natural laws. They are at variance with naturalistic determinism, to be discussed presently, assuming that without exception everything is completely submitted to natural laws. Both contradict the realist view of laws being ordained by the Creator.
As long as natural laws were considered instruments of God’s government, law conformity was easily identified with causality. The laws were considered to be causes, with God as the first cause. Immanuel Kant and his followers were of the opinion that the principle of causality is nothing but the presupposition of law conformity of all natural phenomena.
Isaac Newton assumed that the natural laws were not sufficient to explain God’s interference with the creation. Without His help the solar system could not be stable. When a century later Pierre-Simon Laplace proved that all planetary movements known at the time satisfied Newton’s laws, the idea that God would correct the natural laws was pushed to the back stage of theological discussions about miracles. At present, causality is seen as a relation between events, one being the cause of the other, subject to laws. But a law itself is no longer considered a cause.
In the eighteenth and nineteenth century, natural laws were often identified with laws of force, interpreted in a deterministic way. Determinism is sometimes confused with causality (and with law conformity). In physics causality always implies some form of interaction, for instance in experimental situations: if you do this, that will happen.
In the seventeenth century physical causality was accepted without criticism. This changed with the publications of David Hume in the eighteenth century. He stated that any causal connection between two events is unprovable and possibly an illusion. He believed that causality follows from a psychological motive – the need of humans and animals to predict the effects of their behaviour, making decisions possible. Immanuel Kant tried to save the rationality of causality. He stated that causality, like space and time, is a necessary category of thought, necessary because people could not order their sensorial experience otherwise in a rational way.
Determinism assumed that nature itself is ruled by causality entirely.[3] Pierre-Simon Laplace asserted that we ought to regard the present state of the universe as the effect of its anterior state and as the cause of the one that is to follow.
Believing nature to be completely determined by unchangeable natural laws, determinism has always been an article of faith rather than a well-founded theory. In the twentieth century it is refuted by the discovery and analysis of radioactivity and by the development of quantum physics and chaos theory. Scientists agree that things and events are subject to laws leaving a margin of indeterminacy, contingency or chance, individuality and uniqueness. Still, the worldview of many people make them believe in determinism, contradicting scientific facts and methods.
Twentieth-century science has made clear that lawfulness and randomness coexist, as conditions for an open future. Many laws concern probabilities. Lawfulness does not imply determinism. It appears that laws allow of individual variation. Quantum physics, chaos theory, natural selection, and genetics cannot be understood without the assumption of random processes. Nevertheless, both law and individuality are absolutized in various worldviews. Determinism is upheld contrary to all evidence of random processes, in particular by naturalist science writers believing that everything can and must be reduced to material interactions. As far as applied to human acts, this reductionist determinism contradicts common sense and human responsibility. In contrast, some evolutionists believe that the biological evolution is a pure random process, not subject to any law. It seems difficult to accept that lawfulness and individuality do not exclude each other. In every respect, the dynamic development of reality has both a law side and a subject and object side, as will be discussed below.
A realistic view of natural laws not only implies their existence, but also the possibility to achieve knowledge of them.[4] It distinguishes the laws, which govern nature, being independent of mankind, from law statements as formulated by scientists. Newton’s law of gravity is a law statement having various alternatives, whereas the law of gravity is a natural law ruling the planetary motions and the fall of material bodies. The first is formulated by Newton and dates from the seventeenth century; the latter he discovered, but it presumably dates from the creation. Until the beginning of the twentieth century, Newton’s law statement was considered to be true, but since Albert Einstein’s general theory of relativity, it is considered approximately true. The Newtonian expression is sufficient to solve many problems, and is often preferred because of its relative simplicity. For a similar reason one may prefer Galileo’s law of fall, which Newton showed to be an approximation of his own statement of the law of gravity. Realists consider a law statement as true (or approximately true) if it is a reliable expression of the corresponding natural law. Positivists maintain that a law statement is true if it confirms to observable facts. Realists would call this a criterion for the truth of a law statement.
1.2. Relations
The view that anything is related to everything else is far less controversial than the idea of law, but as a philosophical theme it is equally important. The diversity of temporal reality cannot be reduced to a single principle of explanation. Like a prism refracts the light of the sun into a spectrum of colours, time refracts the unity and totality of reality into a wide variety of temporal relations: among things and events; among people; between people and their environment and all kinds of objects; between individuals and associations; and between associations among each other. Also the relations of people with their God display the same diversity.
These relations can be grouped into relation frames, in the philosophy of the cosmonomic idea called ‘law spheres’ or ‘modal aspects’. In each relation frame, all relations among subjects and objects are governed by one or more laws or principles, characterizing the relation frame concerned. Thus there are physical and mathematical relations, biological and logical, economic and social. The relation frames are supposed to be mutually irreducible, yet not independent. They show a recognizable serial order. For instance, genetic relations are based on physical interaction. Kinetic relations can be projected on spatial relations, and both can be projected on quantitative relations. Each relation frame presupposes the preceding ones (the spatial frame cannot exist without numbers) and deepens them (spatial continuity expands the denumerable set of rational numbers into the continuous set of real numbers).
Because nothing can exist isolated from everything else, the relation frames constitute conditions for the existence of anything. The relation frames are also aspects of human experience, because experience is always expressed in relations. If something changes, it occurs in relation to other things, its environment, for instance. As a consequence, the relation frames are aspects of being, change and experience.
This hypothesis views each relation frame as an aspect of time with its own temporal order. Simultaneity may be considered the spatial order of time, preceded by the quantitative order of earlier and later in a sequence, and succeeded by the kinetic order of uniform succession of temporal moments, the uniform motion from one temporal instant to another. In each relation frame the temporal order functions as a natural law or normative value for relations between subjects and objects, especially among subjects. The relation frames each contain a number of unchangeable natural laws or normative principles, determining the properties of relation networks of subjects and objects.
The temporal order is the law side of a relation frame. The corresponding relations constitute the subject and object side. Philosophically speaking, something is a subject if it is directly and actively subjected to a given law. An object is passively and indirectly (via a subject) subjected to a law. Therefore, whether something is a subject or an object depends on the context. A spatial subject like a triangle has a spatial position with respect to other spatial subjects, subjected to spatial laws. A biotic subject like a plant has a genetic relation to other biotic subjects, according to biotic laws. Something is a physical subject if it interacts with other physical things satisfying laws of physics and chemistry. With respect to a given law, something is an object if it has a function for a subject of that law. Properties of subjects are not subjects themselves (physical properties like mass do not interact), but objects. Hence, not only the subject-subject and subject-object relations, but even the concepts of a subject and of an object are relational.
The relations receive meaning from the temporal order. Serial order is a condition for quantity, and simultaneity for spatial relations. Periodic motions would be impossible without temporal uniformity. Irreversibility is a condition for causal relations; rejuvenation for life; and without purpose, the behaviour of animals would be meaningless.
Natural relations can be grouped together into six natural relation frames, preceding the normative relation frames to be discussed later.
First, putting things or events in a sequence produces a serial order. This order is expressed by numbering the members of the sequence. The sequential order of numbers gives rise to numerical differences and ratio’s, being quantitative subject-subject relations. The subjects of the laws belonging to the first relation frame are first of all the numbers themselves: natural and integral numbers, fractions or rational numbers and real numbers, all ordered on the same scale of increasing magnitude. Numbers are subject to laws of addition and multiplication. Everything in reality has a numerical aspect. Expressing some relation in quantitative terms (numbers or magnitudes) one arrives at an exact and objective representation. The numerical relation frame is a condition for the existence of the other frames.
The second relation frame concerns the spatial synchronous ordering of simultaneity. The relative position of two figures is the universal spatial relation between any two subjects, the spatial subject-subject relation. Whereas the serial order is one-dimensional, the spatial order consists of several mutually independent dimensions. In each dimension the positions are serially ordered and numbered, referring to the numerical. Relative to each of these dimensions, there are many equivalent positions. Independence and equivalence are spatial key concepts, just like the relation of a whole and its parts. The spatial relation frame returns in wave motion as a medium; in physical interactions as a field; in ecology as the environment; in animal psychology as observation space, such as an animal’s field of vision; and in human relations as the public domain. Magnitudes like length, distance, area, or volume, are spatial objects, having a quantitative function for spatial subjects.
The third relation frame records how things are moving and when events occur. Relative motion is a subject-subject relation. Motion presupposes the serial order (the diachronic order of earlier and later) and the order of equivalence (the synchronic order of simultaneity or co-existence), and it adds a new order, the uniform succession of temporal instants. Although a point on a continuous line has no unique successor, it is nevertheless assumed that a moving subject runs over the points of its path successively. Hence, relative motion is an intersubjective relation, irreducible to the preceding two. The law of uniformity concerns all kinds of relatively moving systems, including clocks. Therefore, it is possible to project kinetic time objectively on a linear scale, independent of the number of dimensions of kinetic space.
Contrary to kinetic time, the physical or chemical ordering of events is marked by irreversibility. Different events are physically related if one is the cause of the other, and this relation is irreversible. All physical and chemical things influence each other by some kind of interaction, by exchanging energy or matter, or by exerting a force on each other. Each physical or chemical process consists of interactions. Therefore, the interaction between two things should be considered the universal physical subject-subject relation. Interaction presupposes the relation frames of quantity, space, and motion.
The biotic order may be characterized by rejuvenating and ageing, both in organisms and in populations. An organism germs, ripens and rejuvenates itself by reproduction before it ages. By natural selection, populations rejuvenate themselves before they die out. For the biotic relation frame, the genetic law is universally valid. Each living being descends from another one, all living organisms are genetically related. This applies to the cells, tissues, and organs of a multicellular plant or animal as well. Descent and kinship as biotic subject-subject relations determine the position of a cell, a tissue or an organ in a plant or an animal, and of an organism in one of the biotic kingdoms. Hence, the genetic law constitutes a universal relation frame for all living beings.
The psychic order is being goal-directed. Behaviour, the universal mode of existence of all animals, is directed to future events. Recollection, recognition and expectation connect past experiences and present insight to behaviour directed to the future. Internal and external communication and processing of information are inter- and intra-subjective processes, enabling psychic functioning. Animals are sensitive for each other. By means of their senses, they experience each other as partners; as parents or offspring; as siblings or rivals; as predator or prey. By their mutual sensitivity, animals are able to make connections, between cells and organs of their body, with their environment, and with each other.
After these six natural relation frames, to be investigated in chapters 2-7, ten normative frames will be discussed in chapters 8-19.
These sixteen relation frames are not independent of each other. Except for the final one, all relation frames anticipate the succeeding frames. For instance, the set of real numbers anticipates both spatial continuity and uniform motion. Reversely, each relation frame refers back to preceding frames. The subject-subject relations of one relation frame can be projected onto those of a former one. Numbers represent spatial positions, and motions are measured by comparing distances covered in equal intervals.
These projections are often expressed as subject-object relations. A spatial magnitude like length is an objective property of physical bodies. The possibility to project physical relations on quantitative, spatial and kinetic ones forms the foundation of all physical measurements. Each measurable property requires the availability of a metric: a law for the relations to be measured and their projections. For instance, energy, force and current are generalized projections of physical interaction on quantitative, spatial and kinetic relations respectively.
1.3. Characters and character types
The realist idea of law assumes the existence of invariant natural laws and normative principles. These are not a priori stated as in a rationalist philosophy, but discovered as in the empirical sciences. As a consequence, law statements are fallible and revisable. Laws and principles give rise to recognizable clusters of two kinds. General laws for relations determine six natural relation frames and ten normative ones. Clusters of specific laws form characters and character types for individual things and events, artefacts and associations. Therefore, relations and characters complement each other. As will be shown, character types can be distinguished with the help of relation frames. Postponing the discussion of normativity, the present section deals with natural characters and their relations.
In the history of science a shift is observable from the search for universal laws, via structural laws, toward characters, determining processes besides structures. Even the investigation of structures is less ancient than might be expected. Largely, it dates from the nineteenth century. In mathematics, it resulted in the theory of groups describing symmetries, later to play an important part in physics and chemistry. Before the twentieth century, scientists were more interested in observable and measurable properties of materials than in their structure. Initially, the concept of a structure was used as an explanans, as an explanation of properties. Later on, structure as explanandum, as object for research, came to the fore. During the nineteenth century, the atomic theory functioned to explain the properties of chemical compounds and gases. In the twentieth century, atomic research was directed to the structure and functioning of the atoms themselves. Of course, people have always investigated the design of plants and animals. Yet, as an independent discipline, biology established itself not before the first half of the nineteenth century. Ethology, the science of animal behaviour, only emerged in the twentieth century.
Mainstream philosophy does not pay much attention to structures. Philosophy of science is mostly concerned with epistemological problems (for instance, the meaning of models), and with the general foundations of science. A systematic philosophical analysis of characters is wanting. This is strange, for characters form the most important subject matter of twentieth-century research, in mathematics as well as in the physical and biological sciences.
It is quite common to speak of the structure of thing-like individuals having a certain stability and lasting identity, like atoms, molecules, plants and animals. However, the concept of a structure is hardly applicable to individual events or processes, which are transitive rather than stable and lack a specific form. A dictionary description of the word ‘structure’ would be the manner in which a building or organism or other complete whole is constructed, how it is composed from spatially connected parts. In this sense, an electron has no structure, yet it is no less a characteristic whole than an atom. Depending on temperature and pressure, a solid like ice displays several different crystal structures. The typical structure of an animal, its size, appearance, and behaviour depend characteristically on its sex and age, changing considerably during its development. The structure of an individual subject is changeable, whereas its kind remains the same.
A character may be considered a specific structure. A character defined as a cluster of natural laws, values, and norms is not the structure of, but the law for individuality, indicating how an individual may differ from other individuals. The character of something includes its structure if it has one. It points out which properties it has and which propensities; how it relates to its environment; under which circumstances it exists; how it comes into being, changes and perishes. In this sense, an electron has no structure, but it has a character. Often, a character implies several structures. The structure of water is crystalline below 0oC, gaseous above 100oC, and liquid in between.
A character often shares its laws (sometimes expressed as objective properties) with other characters. Electrons are characterized by having a certain mass, electric charge, magnetic moment, and lepton number. Positrons have the same mass and magnetic moment, but different charge and lepton number. Electrons and neutrino’s have the same lepton number but different mass, charge and magnetic moment. Electrons, positrons and neutrino’s are fermions, but so are all particles, which are not bosons. Therefore, it is never a single law, but always a specific cluster of laws that characterizes things or events of the same kind.
These clusters should not be considered as definitions in a logical sense. It is very well possible to define electrons objectively by their properties like mass and charge only. But this definition says very little about the laws concerning other properties, like the electron’s spin, magnetic moment, or lepton number. The definition does not tell that an electron is a fermion, that it has an antiparticle by which it can be annihilated, or that it belongs to the first of three generations of leptons and quarks. It does not follow from a definition that electrons have the tendency to become interlaced in atoms or metals and in events like oxidation or lightning. It does not depend on a definition that electrons have the disposition to play a part in electric and electronic appliances. Although science needs definitions, theories stating laws are far more important. At the turn of the nineteenth century electrons were identified as charged particles, starting the age of electronics, but the laws for electrons were gradually discovered in a century of painstaking experimental and theoretical research. One can never be sure of knowing the character of a thing or event completely. Human knowledge of most natural kinds is very tentative, even if it were possible to define them fairly accurately by some of their objective properties.
Besides characters, character types should be mentioned. An iron atom satisfies a typical character, different from that of an oxygen atom. They have also properties in common, both belonging to the character type of an atom. Because a natural kind is characterized by a cluster of laws partly shared with other kinds, it is possible to find natural classifications, like the periodic system of the chemical elements or the taxonomy of plants and animals. One may discuss the generic character of an atom or the specific character of a hydrogen atom. From a chemical point of view all oxygen atoms have the same character, but nuclear physicists distinguish various isotopes of oxygen, each having its own character. The biological taxonomy of species, genera, etc., corresponds to a hierarchy of character types.
A character is not a single law, but a cluster of laws. It determines both a subjective class of potential and actual things or events of the same kind, and an objective ensemble of all possible states allowed by the character, describing the possible variation within a class.
The class of all potential things or events determined by a character is not restricted to a limited number, a certain place, or a period of time, but their actual number, place and temporal existence are usually restricted by circumstances like temperature. As a consequence of the supposition that natural laws are invariant, the class of individuals having the same character must be considered invariant as well. But the individual things and events belonging to this class are far from invariant. Any actual collection of individuals (even if it contains only one specimen) is a temporal and variable subset of the class. In an empirical or statistical sense, it is an example or a sample. A number of similar things may be connected into an aggregate, for instance a chemically homogeneous gas of molecules, or a population of interbreeding plants or animals of the same species. An aggregate is a temporal collection, a connected subset of the class defined above. Sometimes it is subject to a cluster of specific aggregate laws (like the gas laws). Probability is the relative frequency distribution of possibilities in a well-defined subset of an ensemble, subject to statistical laws. Empirical statistics is only applicable to a specific collection of individuals of the same kind.
As far as the realization of a character depends on external circumstances like the temperature of the environment, it is temporal, too. This is crucial for the understanding of astrophysical and biotic evolution.
A character considered as a cluster of laws determining the nature of a set of individuals allows of a certain amount of variation, giving room to the individuality of the things or events subject to the character. The range of individual variation is relatively small for quantitative, spatial and kinematic characters, larger for physical ones, and even more so for plants, fungi, or animals. The set of possibilities governed by a character may be called an ensemble. An ensemble’s elements are not things or events, but their objective states. An ensemble of objective possible states is as invariant as the corresponding class of potential subjective individuals. It is a set not bounded in number, space or kinetic time. It includes all possible variations of the individuals subject to the same character, whether the possibilities are realized or not. An ensemble reflects the similarity of the individuals concerned, the properties they have in common, and their possible differences, the variations allowed by the character. Variation means that a character allows of various possibilities, either in a specific or in a general sense. For instance, the character of a triangle allows of specific variation with respect to shape and magnitude, as well as its position, which is not specific. The idea of an ensemble is useful whenever an objective representation is available. In biology, the genotype of each organism is objectively projected on the sequence of nucleotides constituting its DNA-molecules, the so-called genetic code.
In three ways typical kinds are connected to the relation frames discussed above. Primarily, each kind is specifically qualified by the laws for one of the sixteen relation frames. The universal relation of physical interaction, specified as e.g. electric or gravitational, primarily characterizes physical and chemical things, processes and events. General and specific genetic laws constitute primarily the law clusters valid for living beings and life processes. The psychical relation frame, expressed in its goal-directed behaviour is the primary characteristic of an animal’s character. For natural characters, the qualifying relation frame is the final frame in which the things concerned can be subjects, in the succeeding frames they are objects. (This is not the case for normative characters.)
Each relation frame qualifies numerous characters. A traditional point of view acknowledges only three kingdoms of natural kinds, the physical-chemical or mineral kingdom, the plant kingdom and the animal kingdom. However, the quantitative, spatial and kinematic relation frames characterize clusters of laws as well. A triangle, for instance, has a spatial structure, oscillations and waves have primarily a kinematic character, and mathematical groups are quantitatively qualified.
Except for quantitative characters, a relation frame preceding the qualifying one constitutes the secondary characteristic, called its foundation. In fact, a character is not directly founded in a preceding frame, but in a projection of the primary (qualifying) relation frame on a preceding one. For instance, electrons are secondarily characterized by quantities, not by numbers however, but by physical magnitudes like mass, charge, and lepton number. These magnitudes determine to what amount an electron is able to interact with other physical subjects. Atoms, molecules and crystals have a characteristic spatial structure as a secondary characteristic, being as distinctive as the primary (physical) one.
For each primary type one expects as many secondary types as relation frames preceding the qualifying one. For biotically qualified wholes this means four secondary types, corresponding to projections of biotic relations on the quantitative, spatial, kinematic, and physical relation frames, respectively. Prokaryotes (bacteria) and some organelles in eukaryotic cells appear to be subject to law clusters founded in a quantitative projection of the biotic relation frame. Being the smallest reproductive units of life, they are genetically related by asexual multiplication, subject to the serial temporal order. In multicellular organisms, eukaryotic cells operate as units of life as well, but eukaryotic cell division starts with the division of the nucleus, having a prokaryotic structure. The character types for eukaryotic cells, multicellular undifferentiated plants and tissues in differentiated plants are founded in symbiosis, being the spatial expression of shared life.
The tertiary characteristic of a character is a disposition, the natural tendency or affinity of a character to become interlaced with another one, either because the individuals concerned cannot exist without each other (a eukaryotic cell cannot exist without its nucleus and organelles, and vice versa) or because an individual has a natural tendency to become a constitutive part of another one, in which it performs an objective function. Whereas the secondary characteristic refers to properties, the tertiary characteristic is usually a propensity. A particular molecule may or may not have an actual objective function in a plant, yet the propensity to exert such a function belongs to its specific cluster of laws. Interlacement makes characters dynamic.
Some prokaryotes have the disposition to be part of a eukaryotic cell (cell with a nucleus). In multicellular plants, a eukaryotic cell has the disposition to be a specialized part of a tissue or organ. Plants of a certain species have the propensity to occupy a certain niche, to interbreed and to be a member of a population. A population has the propensity to change genetically, eventually to evolve into a different species.
Tertiary characteristics imply a specific subject-object relation between individuals of different kinds. For instance, with respect to the cluster of laws constituting the structure of an atom, the atom itself is a subject, whereas its nucleus and electrons are objects. The nucleus and the electrons interact with each other, maintaining a physical subject-subject relation, but they do not interact with the atom of which they are constitutive parts. The relation of the atom to its nucleus and electrons is a subject-object relation determined by the laws for the atom. In turn, according to their characters nuclei and electrons have a disposition, a tendency, to become encapsulated within the fabric of an atom.
In physics and chemistry, the characters of atoms and molecules are studied without taking into account their disposition to become interlaced with characters primarily characterized by a later relation frame. But biochemistry is concerned with molecules such as DNA and RNA, having a characteristic function in living cells. Like other molecules these are physically qualified and spatially founded, witness the double-helix structure as a fundamental characteristic property of DNA. But much more interesting is the part these molecules play in the production of enzymes and the reproduction of cells, which is their biotic disposition.
Interlacement is only possible if the two or more subjects involved are somehow correlated to each other. Only because electrons and protons have exactly the same electric charge with opposite sign, atomic nuclei and electrons have the disposition to form electrically neutral, quite stable atoms. Atoms having an affinity to form a molecule adapt their internal charge distribution by exchanging one or more electrons (heteropolar bond); or by sharing a pair of electrons (homopolar bond); or by an asymmetric distribution of the electrons (dipolar bond). The character of a typical event like the emission of light is correlated with the characters of the emitting atom and the emitted photon.
Hence, taking into account its propensities, the specific laws for a physical subject like a molecule not only determine its structure and physical-chemical interactions, but its full dynamic meaning in the cosmos as well. The theory of interlacement steers a middle course between reductionism (stressing the secondary, foundational properties of things) and holism (emphasizing the tertiary functions of things in an encompassing whole).
More about character interlacement in section 1.4.
Many a thing or process that we experience as an individual unit turns out to be an aggregate of individuals. I shall call an individual thing an aggregate if it lacks a characteristic unity. Examples are a pebble, a wood, or a herd of goats. A process is an aggregate as well. It is a chain of connected events. For a physicist or a chemist, a plant is an aggregate of widely differing molecules, but for a biologist, a plant is a characteristic whole. An aggregate consists of at least two individual things, but not every set is an aggregate. The components should show some kind of coherence.
To establish whether something is an individual or an aggregate is not an easy matter. It requires knowledge of the character that determines its individuality. It appears to be important to distinguish between homogeneous and heterogeneous aggregates. A homogeneous aggregate is a coherent collection of similar individuals, for instance a wave packet conducting the motion of a photon or an electron; or a gas consisting of similar molecules; or a population of plants or animals of the same species. A heterogeneous aggregate consists of a coherent collection of dissimilar individuals, for instance a gaseous mixture like air, or an ecosystem in which plants and animals of various species live together.
1.4. Interlacement of characters
Even apart from the existence of aggregates, an individual never satisfies the simple character type described in section 1.3. Because of its tertiary characteristic, each character is interlaced with other characters. On the one side, character interlacement is a relation of dependence, as far as the leading character cannot exist without the characters interlaced in or with it. The character of a molecule exists thanks to the characters of its atoms. On the other hand, character interlacement rests on the disposition of a thing or event to become a part of a larger whole. If it actualizes its disposition, it retains its primary and secondary character largely. Sometimes characters are so strongly interlaced that one had better speak of a ‘dual character’, as in the wave-particle duality (4.3).
My distinction of two interlaced characters is an elaboration of Herman Dooyeweerd’s distinction of radical types, genotypes and variability types.[5]
Several types of character interlacement should be distinguished.
In the first type of interlacement, the whole has a qualifying relation frame different from those of the characters interlaced in the whole. In chapters 4 and 5 we shall meet this phenomenon in the wave-particle duality, where the particle character is physically qualified (particles interact with each other, which waves do not) and the wave character is primarily kinetic. As a measure of probability, the wave character anticipates physical interactions.
A second example is the physically qualified character of a DNA molecule being interlaced with the biotic character of a living cell. The molecule is physically qualified, the cell biotically. Their characters cannot be understood apart from each other. The cell is a biotic subject, the DNA-molecule a biotic object, the carrier of the genome, i.e., the ordered set of genes. A cell without DNA cannot exist, whereas DNA without a cell has no biotic function. The cell and the DNA molecule are mutually interlaced in a characteristic subject-object relation.
We find this type of interlacement in processes as well. For instance, the character of each biotic process is intertwined with that of a biochemical process. The behaviour of animals is interlaced with those of processes in their nervous system.
This may look like supervenience.[6] The idea of supervenience, usually applied to the relation of mind and matter, says that phenomena on a higher level are not always reducible to accompanying phenomena on a lower level. It is supposed that material states and processes invariantly lead to the same mental ones, but the reverse is not necessarily the case. A mental process may correspond with various material processes.
As David Papineau observes: ‘Supervenience on the physical means that two systems cannot differ chemically, or biologically, or psychologically, or whatever, without differing physically; or, to put it the other way round, if two systems are physically identical, then they must also be chemically identical, biologically identical, psychologically identical, and so on.’[7] This does not imply reductionism: ‘… I don’t in fact think that psychological categories are reducible to physical ones.’[8] According to Papineau, in particular natural selection implies that biology and psychology are not reducible to physics, contrary to chemistry and meteorology.[9] But elsewhere Papineau writes: ‘Everybody now agrees that the difference between living and non-living systems is simply having a certain kind of physical organization (roughly, we would now say, the kind of physical organization which fosters survival and reproduction)’,[10] without realizing that this does not concern a physical but a biotic ordering, and that natural selection, survival and reproduction are not physical but biological concepts.
Character interlacement implies much more than supervenience, which in fact is no more than a reductionist subterfuge.
The second type of interlacement occurs if one or more characters having the same qualifying relation frame but different foundations form a single whole.
For example, the character of an atom is interlaced with the characters of its nucleus and electrons. All these characters are physically qualified. The electron’s character is quantitatively founded, whereas the character of the nucleus is spatially founded like that of the atom. However, in the structure of the atom, the nucleus acts like a unit having a specific charge and mass, as if it were quantitatively founded, like the electrons. The (in this sense) quantitatively founded character of the nucleus and that of the electrons anticipate the spatially founded character of the atom. The nucleus and the electrons have a characteristic subject-subject relation, interacting with each other. Nevertheless, they do not interact with the atom of which they are a part, for they have a subject-object relation with the atom, and interaction is a subject-subject relation.
In the third type of interlacement of characters, there is no anticipation of one relation frame to another. For instance, in the interlacement of atomic groups into molecules all characters are physically qualified and spatially founded. For another example, the character of a plant is interlaced with those of its organs like roots and leaves, tissues and cells. Each has its own biotic character, interlaced with that of the plant as a whole. A comparable hierarchy of characters we find in two-, three- or more-dimensional spatial figures. A square is a two-dimensional subject having an objective function as the side of a cube.
Characters of processes are interlaced with the characters of the things involved. Individual things come into existence, change and perish in specific processes. Complex molecules come into existence by chemical processes between simpler molecules. A cell owes its existence to the never ending process called metabolism: respiration, photosynthesis, transport of water, acquisition of food, and secretion of waste, dependent on the character of the cell.
Usually processes occur on the substrate of things, and many thing-like characters depend on processes. Quantum physics assumes that even the most elementary particles are continuously created and annihilated. The question of which is the first, the thing or the process, has no better answer than that of the chicken and its egg. There is only one cosmos in which processes and things occur, generating each other and having strongly interlaced characters.
When a character is interlaced with another one, its properties change without disappearing entirely. If an atom becomes part of a molecule, its character remains largely the same, even if its distribution of charge is marginally adapted.
It is interesting that molecules have properties that the composing atoms do not have. A water molecule has properties which are absent in the molecules or atoms of hydrogen or oxygen. Water vapour is a substance completely different from a mixture of hydrogen and oxygen. This universally occurring phenomenon is called emergence.[11] The theory of emergence states that at a higher level new properties emerge that do not occur at a lower level, the whole is more than the sum of its parts.[12] In suit of Theodosius Dobzhansky, Ledyard Stebbins[13] speaks of ‘transcendence’: ‘In living systems, organization is more important than substance. Newly organized arrangements of pre-existing molecules, cells, or tissues can give rise to emergent or transcendent properties that often become the most important attributes of the system’[14]. Besides the emergence of the first living beings and of humanity, Stebbins mentions the following examples: the first occurrence of eukaryotes, of multicellular animals, of invertebrates and vertebrates, of warm-blooded birds and mammals, of the higher plants and of flowering plants. According to Stebbins, reductionism and holism are contrary approximations in the study of living beings, with equal and complementary values. It plays a part in discussions between reductionists and holists, not only in biology or in anthropology, but also in physics, where the planned construction of the superconducting supercollider (SSC) about 1990 gave rise to fierce discussions. Supporters (among whom Steven Weinberg) assumed that the understanding of elementary particles will lead to the explanation of all material phenomena. Opponents stated that for instance solid state physics owes very little to a deeper insight into sub-atomic processes.[15]
Emergence is expressed in the symmetry of a system, for instance. A free atom has the symmetry of a sphere, but this is no longer the case with an atom being a part of a molecule. The atom adapts its symmetry to that of the molecule by lowering its spherical symmetry. The symmetry of the molecule is not reducible to that of the composing atoms. Symmetries (not only spatial ones) and symmetry breaks play an important part in physics and chemistry. ‘Constraints’ like initial and boundary conditions are possible causes of a symmetry break.
The theory of character interlacement leads to an improved insight into the phenomenon of emergence. When a thing gets interlaced with another one its properties change without getting lost completely. When an atom becomes a part of a molecule its character remains recognizable, even if marginally changed. In a molecule an atom may become an ion, for instance, but the nucleus and the inner electrons are hardly influenced by chemical reactions. But the molecule’s properties differ considerably from those of the composing atoms. A water molecule has properties irreducible to the properties of hydrogen and oxygen. Water vapour is completely different from a gaseous mixture of hydrogen and oxygen. This widely occurring phenomenon is called the emergence of new properties.
This should be distinguished from the emergence of individuals belonging to a different character than those of the composing individuals. A typical example is the formation of a molecule from atoms or molecules, which is only possible if the composing atoms and molecules have the disposition to become interlaced with each other. Hydrogen and oxygen molecules, both consisting of two atoms, have the disposition to form water molecules only after they have broken their molecular bond. Within the structure of the water molecule, some properties of hydrogen atoms and oxygen atoms are recognizable, but hydrogen and oxygen lack several typical properties of a water molecule.
The phenomenon of the emergence of new characters plays an important part in the natural evolution understood as the realization of characters that did potentially but not actually exist before. Invariant characters come into actual existence if the circumstances permit it. Evolution occurs at the subject side of natural characters, not at their law side. Yet natural evolution is not a completely random process, but lawful dynamic development.
Scientific classification is different from the typology of characters based on universal relation frames. Classification means the formation of sets of characters based on specific similarities and differences. This is possible because each character is a set of laws, which it partly shares with other characters. A set of characters is determined by having some specific laws in common. An example of a specific classification is the biological taxonomy of living beings according to species, genera, etc. Other examples are the classification of chemical elements in the periodic system, of elementary particles in generations of leptons and quarks, and of solids according to their crystalline structure (5.3, 5.4).
Because specific classifications rest on specific laws, the chemical classification of the elements is hardly comparable to the biological classification of species. The general typology of characters developed in this encyclopaedia is applicable to widely different branches of natural science and may therefore lead to a deepened understanding of characters. Moreover, the typology provides insight into the coherence and the meaning of characters.
Each individual thing is either a subject or an object with respect to any relation frame in a way determined by its primary, secondary, and tertiary characteristics. Individual things and events present themselves in their relations to other things and events, allowing us to establish their identity.
The meaning of a thing or event can only be found in its connection with other things and events, and with the laws valid for them. In addition, the meaning of a character comes to the fore only if we take into account its interlacements with other characters. For instance, it is possible to restrict a discussion of water to its physical and chemical properties. Its meaning, however, will only become clear if we include in the discussion that water is a component of many other materials. Water plays a part in all kinds of biotic processes, and it appeases the thirst of animals and humans. Water has a symbolic function in our language and in many religions. The study of the character of water is not complete if restricted to the physical and chemical properties. It is only complete if we consider the characteristic dispositions of water as well. ‘Nowhere else is the intrinsic untenability of the distinction between meaning and reality so conclusively in evidence as in things whose structure is objectively qualified.’[16]
Likewise, the meaning of individual things and events is only clear in their lawful relations with other individuals. These relations we have subsumed in relation frames, which are of profound significance for the typology of characters. We find the meaning of the cosmos in the coherence of relation frames and of characters, and in particular in the religious concentration of humankind to the orgin of the creation, as we have seen before.
The theory of characters as applied in this encyclopaedia rests on the presupposition that a character as a set of laws determines the specific nature of things or processes. Such a set leaves room for individual variation. Hence, the theory is not deterministic. Reality has both a law side and a subject side that cannot be separated. Both are always present. In each thing and each process, we find lawfulness besides individuality.
The theory of characters is not essentialist either. Essentialism means the hypostatization of being (Latin: esse), contrary to the view that the meaning of anything follows from its relations to everything else. According to Herman Dooyeweerd, the ‘meaning nucleus’ and its ‘analogies’ with other aspects determine the meaning of each modal aspect. However, this incurs the risk of an essentialist interpretation, as if the meaning nucleus together with the analogies determines the ‘essence’ of the modal aspect concerned. In my view, the meaning of anything is determined by its relations to everything else, not merely by the universal relations as grouped into the relation frames, but by the mutual interlacements of the characters as well.
The primary characteristic of each character is not determined by a property of the thing or process itself. Rather, its relations with other things or processes, subject to the laws of a relation frame, are primarily characteristic of a character. Besides, the secondary and tertiary characteristics concern relations subject to general and specific laws as well. In particular the tertiary characteristic, the way by which a character is interlaced with other characters, provides meaning to the things and processes concerned. Essentialism seeks the meaning (the essence) of characters in the things and events themselves, attempting to catch them into definitions. In a relational philosophy, definitions do not have a high priority.
The theory of characters is not reductionistic. This statement may be somewhat too strong, for there is little objection to raise against ‘constitutive reductionism’. This concept states that all matter consists of the same atoms or sub-atomic particles, and that physical and chemical laws act on all integration levels. According to Ernst Mayr, ‘constitutive reductionism … asserts that the material composition of organisms is exactly the same as found in the inorganic world. Furthermore, it posits that none of the events and processes encountered in the world of living organisms is in any conflict with the physical or chemical phenomena at the level of atoms and organisms. These claims are accepted by modern biologists. The difference between inorganic matter and living organisms does not consist in the substance of which they are composed but in the organization of biological systems.’ [17] Mayr rejects every other kind of reductionism. ‘Reduction is at best a vacuous, but more often a thoroughly misleading and futile, approach.’[18]
The theory of characters supposes that the laws for physical and chemical relations cannot be reduced to laws for quantitative, spatial, and kinetic relations. However, we have observed already that physical and chemical relations can be projected onto quantitative, spatial and kinetic relations. This explains the success of ‘methodical reductionism’. It asserts the existence of laws for biotic and psychic relations transcending the physical and chemical laws. It is at variance with a stronger form of reductionism, presupposing that living organisms only differ from molecules by a larger degree of complexity,[19] whether or not supplied by the phenomena of supervenience and emergence. Richard Dawkins calls his view ‘hierarchical reductionism’, that ‘… explains a complex entity at any particular level in the hierarchy of organization, in terms of entities only one level down the hierarchy; entities which, themselves, are likely to be complex enough to need further reducing to their own component parts; and so on. It goes without saying - … - that the kinds of explanations which are suitable at high levels in the hierarchy are quite different from the kinds of explanations which are suitable at lower levels.’ Dawkins rejects the kind of reductionism ‘… that tries to explain complicated things directly in terms of the smallest parts, even, in some extreme versions of the myth, as the sum of the parts…’.I believe that the phenomenon of character interlacement gives a better representation of reality.
The theory of characters cannot be argued on a priori grounds. As an empirical theory, it should be justified a posteriori, by investigating whether it agrees with scientific results. This we shall do in the chapters to come.
1.5. Natural evolution and cultural history
The philosophy of dynamic development assumes that for understanding the temporal creation insight in relations and characters is required. Contrary to Dooyeweerd’s and Vollenhoven’s philosophy of the cosmonomic idea, I believe that the modal aspects are not primarily modes of existence or modes of human experience (which they certainly are), but formost determine relations.With respect to characters, I emphasize much more their tertiairy propensities or dispositions than their primary and secondary properties.
This means that relations and characters are not static, but dynamic. The creation develops continuously according to constant laws, general laws grouped into relation frames and specific laws constituting character types. The natural evolution (to be discussed in part I of the Encyclopaedia)is a rather slow process, starting some fourteen billion years ago and processing almost invisibly. In the historical development (part II) humankind takes an active part. Since its slow start, perhaps two hundred thousand years ago, it is more and more accelerating.
The anticipatory structure of the relation frames accounts for the start of any dynamic development (1.2), but the key for its understanding is the tertiary characteristic of characters, their disposition to become interlaced with each other (1.3, 1.4). Because of these dispositions the numerical, spatial and kinetic characters play a decisive part in the evolution of physical and biotic characters.
Herman Dooyeweerd observes that meaning is the mode of being of all that is created.[20] It becomes manifest whenever something new comes into being. The tertiary characteristics of natural things and events point to the possibility of the emergence of new structures with emerging new properties and propensities. It provides the original characters with meaning, their proper position in the creation. The phenomenon of disposition shows that material things like molecules have meaning for living organisms. It shows that organisms have meaning for animal life. The assumption that God’s people are called from the animal world gives meaning to the existence of animals. Both evolution and history display the meaningful development of the creation, the coming into being of ever more structures. Artefacts, in particular written texts, are the most important witnesses of history.[21] They provide history with an objective basis, complementing the normative meaning of history, provided by the directive time order in the relation frames,[22] and the subjective attribution of meaning by individuals, associations and unorganised communities in their history shaping transfer of experience.[23]
As a starting point this implies a realist religious view, confessing that God created the world according to laws which are invariant because he sustains them. We know God through Jesus Christ, who submitted himself to God’s laws. Partial knowledge of his laws can be achieved by studying the law-conformity of the creation. This also implies a dynamic view of the creation, as developing continuously, in the natural evolution and in particular in human history.
The theory of evolution may be able to provide necessary conditions for the emergence of human affairs, but by no means sufficient conditions. As a religious statement, we may take the biblical message to reveal that humanity was led out of the animal world, called to be free and challenged to take responsibility for the development of God’s creation. This may truly be called the cultural mandate of humanity. The relation frames introduced above include the relations of anyone with their true or imagined God. For Christians, these relations exist through Jesus Christ, who came into the world fulfilling God’s laws for the creation, leading his people out of the animal world.
[1] Stafleu 2015, 2017.
[2] Stafleu 2019, chapter 6.
[3] Stafleu 2018b, chapter 12.
[4] Stafleu 2019, chapter 7.
[5] Dooyeweerd 1953-1958, III, 83, 89-96, 109-128.
[6] Charles, Lennon (eds.) 1992, 14-18.
[7] Papineau 1993, 10:
[8] Papineau 1993, 44.
[9] Papineau 1993, 47; Plotkin 1994, 52, 55; Sober 1993, 73-77
[10] Papineau 1993, 122,
[11] Stafleu 2018.
[12] Popper 1972, 242-244, 289-295; 1974, 142; Popper, Eccles 1977, 14-31; Mayr 1982, 63-64.
[13] Stebbins 1982, 161-167
[14] Stebbins 1982, 167.
[15]Anderson 1995; Weinberg 1995; Kevles 1997; Cat 1998.
[16] Dooyeweerd 1953-1958, III, 107.
[17] Mayr 1982, 60.
[18] Mayr 1982, 63.
[19] Dawkins 1986, 13.
[20] Dooyeweerd NC, I, 4.
[21] Stafleu 2011, chapter 3.
[22] Stafleu 2011, chapter 1.
[23] Stafleu 2011, chapter 2.
Encyclopaedia of relations and characters,
their evolution and history
Chapter 2
2.1. Sets and natural numbers
2.2. Extension of the quantitative relation frame
2.3. Groups as characters
2.4. Ensemble and probability
Encyclopaedia of relations and characters. 2. Sets
2.1. Sets and natural numbers
Plato and Aristotle introduced the traditional view that mathematics is concerned with numbers and with space. Since the end of the nineteenth century, many people thought that the theory of sets would provide mathematics with its foundations. According to Ernst Zermelo, for instance, ‘set theory is that branch of mathematics whose task is to investigate mathematically the fundamental notions of ‘number’, ‘order’, and ‘function’ taking them in their pristine, simple form, and to develop thereby the logical foundations of all of arithmetic and analysis.’[1] Since the middle of the twentieth century, the emphasis is more on structures and relations: ‘Mathematics is the deductive study of structures’.[2]
In chapter 1, I defined a natural character as a set of natural laws, determining a class of individuals and an ensemble of possible variations. Because classes, ensembles and aggregates are sets, it is apt to pay attention to the theory of sets.
In sections 2.1-2.2 it will appear that with each set at least two relation frames are concerned, according to the tradition to be called the quantitative and the spatial frames. The elements of a quantitative or discrete set can be counted, whereas the parts of a spatial or continuous set can be measured. Section 2.3 discusses some quantitatively qualified characters, in particular groups. Section 2.4 relates the concept of an ensemble with that of probability.
Numbers constitute the relation frame for all sets and their relations. A set consists of a number of elements, varying from zero to infinity, whether denumerable or not, but there are sets of numbers as well. What was the first, the natural number or the set? Just as in the case of the chicken and the egg, an empiricist may wonder whether this is a meaningful question. We have only one reality available, to be studied from within. In the cosmos, we find chickens as well as eggs, sets as well as numbers. Of course, we have to start our investigations somewhere, but the choice of the starting point is relatively arbitrary. Rejecting the view that mathematics is part of logics, I shall treat sets and numbers in an empirical way, as phenomena occurring in the cosmos.
At first sight, the concept of a set is rather trivial, in particular if the number of elements is finite. Then the set is denumerable and countable; we can number and count the elements. It becomes more intricate if the number of elements is not finite yet denumerable (e.g., the set of integers), or infinite and non-denumerable (e.g., the set of real numbers). Let us start with finite sets.
Sets concern all kinds of elements, hence they are closer to concrete reality than numbers. (As a human act, collecting of fruits etc. is one of the oldest means to provide food.) Quantity or amount is a universal aspect of sets. It is an abstraction like the other five natural relation frames announced in section 1.2. For instance, by isolating the natural numbers we abstract from the equivalence relation.
Equivalence is reflexive (A º A), symmetric (if A º B, then B º A), and transitive (if A º B and B º C, then A º C). On the other hand, numbers are subject to the order of increasing magnitude. This sequential order is exclusive (either a > b, or b > a), asymmetric (if a > b, then b < a), not reflexive (a is not larger or smaller than a), but it is transitive (if a > b and b > c, then a > c). For numbers, the equivalence relation reduces to equality: a = a; if a = b then b = a; if a = b and b = c then a = c. Usually equivalence is different from equality, however.
Two sets A and B are numerically equivalent if their elements can be paired one by one, such that each element of A is uniquely combined with an element of B and conversely. All sets being numerically equivalent to a given finite set A constitute the equivalence class [n] of A. One element of this class is the set of natural numbers from 1 to n. All sets numerically equivalent to A have the same number of elements n. I consider the cardinal number n to be a discoverable property (e.g., by counting or calculating) of each set that is an element of the equivalence class [n]. The numbers 1…n function as ordinal numbers or indices to put the elements of the set into a sequence, to number and to count them. It is a law of arithmetic that in whatever order the elements of a finite set are counted, their number will always be the same.
Sometimes the elements of an infinite set can also be numbered. Then we say that the set is infinite yet denumerable. The set of even numbers, e.g., is both infinite and denumerable. As a set of indices, the natural numbers constitute a universal relation frame for each denumerable set. However, the set of natural numbers is a character class as well. It is relevant to distinguish relation frames from characters, but they are not separable.
Giuseppe Peano’s axioms formulate the laws for the sequence N of the natural numbers. The axioms apply the concepts of sequence, successor and first number, but it does not apply the concept of equivalence. According to Peano, the concept of a successor is characteristic for the natural numbers:
1. N contains a natural number, indicated by 0.
2. Each natural number a is uniquely joined by a natural number a+, the successor of a.
3. There is no natural number a such that a+ = 0.
4. From a+ = b+ follows a = b.
5. If a subset M of N contains the element 0, and besides each element a its successor a+ as well, then M = N.
Peano took 1 to be the first natural number. Nowadays one usually starts with 0, to indicate the number of elements in the zero set. In the decimal system 0+ = 1, 1+ = 2, 2+ = 3, etc. In the binary system 0+ = 1, 1+ = 10, 10+ = 11, 11+ = 100, etc. From axiom 2 it follows that N has no last number.
The fifth axiom states that the set of natural numbers is unique. The sequence of even numbers satisfies the first four axioms but not the fifth one. The transitive relation ‘larger than’ is now applicable to the natural numbers. For each a, a+>a. If a>b and b>c, then a>c, for each trio a, b, c.
On the axioms rests the method of proof by complete induction: if P(n) is a proposition defined for each natural number n > a; and P(a) is true; and P(n+) is true if P(n) is true; then P(n) is true for any n > a.
The natural numbers constitute a character class. Their character, expressed by Peano’s axioms, is primarily quantitatively characterized. It has no secondary foundation for lack of a relation frame preceding the quantitative one. Because the first relation frame does not have objects, it makes no sense to introduce an ensemble of possibilities besides any numerical character class.
As a tertiary characteristic, the set of natural numbers has the disposition to expand itself into other sets of numbers (2.2).
The laws of addition, multiplication, and raising powers are derivable from Peano’s axioms.[3] The class of natural numbers is complete with respect to these operations. If a and b are natural numbers, then a+b, a.b en ab are natural numbers as well. This does not always apply to subtraction, division or taking roots, and the laws for these inverse operations do not belong to the character of natural numbers.
In 1931, Gödel proved that any system of axioms for the natural numbers allows of unprovable statements.[4] This means that Peano’s axiom system is not logically complete.
Using the two ordering relations discussed, ‘larger than’ and ‘numerical equivalence’, we can order all denumerable sets. All sets having n elements are put together in the equivalence class [n], whereas the equivalence classes themselves are ordered into a sequence. The sets in the equivalence class [n] have no more in common than the number n of their elements.
The set of natural numbers is the oldest and best-known set of numbers. Yet it is still subject to active mathematical research, resulting in newly discovered regularities, such that ‘… the differences between mathematics and empirical science have been vastly exaggerated.’[5] ‘Even arithmetic contains randomness. Some of its truths can only be ascertained by experimental investigation. Seen in this light it begins to resemble an experimental science.’[6]
Some theorems relate to prime numbers. Euclid proved that the number of primes is unlimited. An arithmetical law says that each natural number is the product of a unique set of primes. Several other theorems concerning primes are proved or conjectured. For instance, Christian Goldbach’s conjecture, saying that each even number can be written as the sum of two primes in at least one way, dates from 1742, but is at the end of the twentieth century neither proved nor disproved.
In many ways, the set of primes is notoriously irregular. There is no law to generate them. If one wants to find all prime numbers less than an arbitrarily chosen number n, this is only possible with the help of an empirical elimination procedure, known as Eratosthenes’ sieve. From the set of natural numbers 1 to n, starting from 3 the sieve eliminates all even numbers, all triples, all quintets except 5, (the quartets and sixtuplets have already been eliminated), all numbers divisible by 7 except 7 itself, etc., until one reaches the first number larger than the square of n. Then all primes smaller than n remain on the sieve. For very large prime numbers, this method consumes so much time that the resolution of a very large number into its factors is used as a key in cryptography.
There are much more sequences of natural numbers subject to a characteristic law or prescription. An example is the sequence of Fibonacci (Leonardo of Pisa, circa 1200). Starting from the numbers 1 and 2, each member is the sum of the two preceding ones: 1, 2, 3, 5, 8, 13, … This sequence plays a part in the description of several natural processes and structures.[7]
The relation of a set to its elements is a numerical law-subject relation, for a set is a number of elements. By contrast, the relation of a set to its subsets is a whole-part relation that can be projected on a spatial figure having parts. A subset is not an element of the set, not even a subset having only one element. A set may be a member of another set. For instance, the numerical equivalence class [n] is a set of sets. A well-known paradox arises if a set itself satisfies its prescription, being an instance of self-reference. The standard example is the set of all sets that do not contain themselves as an element. Restricting the prescription to the elements of the set may preclude such a paradox. This means that a set cannot be a member of itself, not even if the elements are sets themselves.[8] The number of subsets is always larger than the number of elements, a set of n elements having 2n subsets. A set contains an infinite number of elements if it is numerically equivalent to one of its subsets. For instance, the set of natural numbers is numerically equivalent to the set of even numbers and is therefore infinite. However, the set of all subsets of a given set A should not be confused with the set A itself.
Overlapping sets have one or more elements in common. The intersection A or B of two sets is the set of all elements that A and B have in common. The empty set or zero set is the intersection of two sets having no elements in common. Hence, there is only one zero set. It is a subset of all sets. (This is a consequence of the axiom stating that two sets are identical if they have the same elements.) If a set is considered a subset of itself, each set has trivially two subsets. (An exception is the zero set, having only itself as a subset).
The union A and B of two sets looks more like a spatial than a numerical operation. Only if two sets have no elements in common, the total number of elements is equal to the sum of the numbers of elements of the two sets apart. Otherwise, the sum is less: If n(A) is the number of elements of A, then n(A and B) = n(A) + n(B) – n(A or B).
Hence, even for denumerable sets the numerical relation frame is not sufficient. At least a projection on the spatial relation frame is needed. This is even more true for non-denumerable sets (2.2).
Some sets are really spatial, like the set of points in a plane contained within a closed curve. As its magnitude, one does not consider the number of points in the set, but the area enclosed by the curve. The set has an infinite number of elements, but a finite measure. A measure is a magnitude referring to but not reducible to the numerical relation frame. It is a number with a unit, a proportion.
This measure does not deliver a numerical relation between a set and its elements. It is not a measure of the number of elements in the set. A measure is a quantitative relation between sets, e.g., between a set and its subsets. If two plane spatial figures do not overlap but have a boundary in common, the intersection of the two point sets is not zero, but its measure is zero. The area of the common boundary is zero. In general, only subsets having the same dimension as the set itself have a non-zero measure. We shall see in section 2.2 that all numbers (including the natural ones) determine relations between sets. Only the natural numbers relate countable sets with their elements as well.
Integral calculus is a means to determine the measure of a spatial figure, its length, area or volume. In section 2.4, we discuss probability being a measure of subsets of an ensemble.
For each determination of a measure, each measurement, real numbers are needed. That is remarkable, for an actual measurement can only yield a rational number (2.2).
The number 2 is natural, but it is an integer, a fraction, a real number and a complex number as well. Precisely formulated: the number 2 is an element of the sets of natural numbers, integers, fractions, real and complex numbers. This leads to the conjecture that we should not conceive of the character of natural numbers to determine a class of things, but a class of relations. The natural numbers constitute a universal relation frame for all denumerable sets. Peano’s formulation characterizes the natural numbers by a sequence, that is a relation as well. We shall see that the integers, the rational, real, and complex numbers are definable as relations. In that case, it is not strange that the number 2 answers different types of relations. A quantitative character determines a set of numbers, and a number may belong to several sets, each with its own character. The number 2 is a knot of relations, which is characteristic for a ‘thing’. On the other hand, it responds to various characters, and that is not very ‘thing-like’.
However, it is not fruitful to quarrel extensively about the question of whether a number is essentially a thing or a relation. Anyway, numbers are individual subjects to quantitative laws.
Encyclopaedia of relations and characters. 2. Sets
2.2. Extension of the quantitative
relation frame
The natural numbers satisfy laws for addition, multiplication, and taking powers, by which each pair of numbers generates another natural number. The inverse operations, subtraction, division and taking roots, are not always feasible within the set of natural numbers. Therefore, mathematics completes the set of natural numbers into the set of integers and the set of rational numbers. Put otherwise, the set of natural numbers has the disposition of generating the sets of integral numbers and of rational numbers. There remain holes in the set of rational numbers, there are still magnitudes (like the ratio of the diagonal of a square to one of its sides) which cannot be expressed in rational numbers. These holes are to be filled up by the irrational numbers. The various number sets constitute a hierarchy, consisting of the sets of, respectively, natural, integral, rational, real, and complex numbers. Each of these sets has a separate character. A natural number belongs to each of these sets. A negative integer belongs to all sets except that of the natural numbers. A fraction like ½ belongs to each set except the first two sets.
Before discussing the character of integral, rational, real, and complex numbers, I mention some properties.
Starting from its element 0, the set of integral numbers can also be defined by stating that each element a has a unique successor a+ as well as a unique predecessor a-, if (a+)- = a.[9] Each integer is the difference between two natural numbers. Several pairs may have the same difference. Hence, each integral number corresponds to the equivalence class of all pairs of integers having the same difference. Likewise, each rational number corresponds to the equivalence class of all pairs of natural, integral and rational numbers having the same proportion or the same difference. If we do not want to relapse into an infinite regress, we had better not identify (in the way of an essentialist definition) an integer or a rational number with an equivalence class. The meaning of a number depends on its relation to all other numbers and the disposition of numbers to generate other numbers.[10]
The laws for addition, subtraction, multiplication, and division are now valid for the whole domain of rational numbers, including the natural and integral numbers. It can be proved that the sum, the difference, the product and the quotient of two rational numbers (excluding division by 0) always gives a rational number. Hence, the set of rational numbers is complete or closed with respect to these operations. After the recognition of the natural numbers as a set of indices, the introduction of negative and rational numbers means a further abstraction with respect to the concept of a set. A set cannot have a negative number of elements, and halving a set is not always possible. The integral and rational numbers are not numbers of sets, but quantitative relations between sets. They are applicable to other domains as well, for instance to the division of an apple. The universal applicability of the quantitative relation frame requires the extension of the set of natural numbers.
Meanwhile, two properties of natural numbers have been lost. Neither the integral nor the rational numbers have a first one, though the number 0 remains exceptional in various ways. Moreover, a rational number has no unique successor. Instead of succession, characteristic for the natural and integral numbers, rational numbers are subject to the order of increasing magnitude. This corresponds to the quantitative subject-subject relations (difference and proportion): if a > b then ab > 0, and if moreover b > 0 then a/b > 1. For each pair of rational numbers, it is clear which one is the largest, and for each trio, it is clear which one is between the other two.
The classes of natural numbers, integers and rational numbers each correspond to a character of their own. These characters are primarily qualified by quantitative laws and lack a secondary characteristic. We shall see that the character of the rational numbers has the (tertiary) disposition to function as the metric for the set of real numbers.
The road from the natural numbers to the real ones proceeds via the rational numbers. A set is denumerable if its elements can be put in a sequence. Georg Cantor demonstrated that all denumerable infinite sets are numerically equivalent, such that they can be projected on the set of natural numbers. Therefore, he accorded them the same cardinal number, called aleph-zero, after the first letter of the Hebrew alphabet. Cantor assumed this ‘transfinite’ number to be the first in a sequence, aleph 0, aleph 1, aleph 2 … , where each is defined as the ‘power set’ of its predecessor, i.e., the set of all its subsets.
The rational numbers are denumerable, at least if put in a somewhat artificial order. The infinite sequence 1/1;1/2, 2/1;1/3, 2/3, 3/1, 3/2; 1/4, 2/4, 3/4, 4/1, 4/2, 4/3; 1/5, … including all positive fractions is denumerable. In this order it has the cardinal number of aleph 0. However, this sequence is not ordered according to increasing magnitude.
In their natural (quantitative) order of increasing magnitude, the fractions lay close to each other, forming a dense set. This means that no rational number has a unique successor. Between each pair of rational numbers a and b there are infinitely many others. (If a < b then a < a+c(b-a) < b, for each rational value of c with 0 < c < 1.) In their natural order, rational numbers are not denumerable, although they are denumerable in a different order. Contrary to a finite set, whether an infinite set is countable may depend on the order of its elements.
Though the set of fractions in their natural order is dense, it is still possible to put other numbers between them. These are the irrational numbers, like the square root of 2 and pi. According to the tradition, Pythagoras or one of his disciples discovered that he could not express the ratio of the diagonal and the side of a square by a fraction of natural numbers. Observe the ambiguity of the word ‘rational’ in this context, meaning ‘proportional’ as well as ‘reasonable’. The Pythagoreans considered something reasonably understandable, if they could express it as a proportion. They were deeply shocked by their discovery that the ratio of a diagonal to the side of a square is not rational. The set of all rational and irrational numbers, called the set of real numbers, turns out to be non-denumerable. I shall argue presently that the set of real numbers is continuous, meaning that no holes are left to be filled.
Only in the nineteenth century, the distinction between a dense and a continuous set became clear.[11] Before, continuity was often defined as infinite divisibility, not only of space. For ages, people have discussed about the question whether matter would be continuous or atomic. Could one go on dividing matter, or does it consist of indivisible atoms? In this case, tertium non datur is invalid. There is a third possibility, generally overlooked, namely that matter is dense.
Even the division of space can be interpreted in two ways. The first was applied by Zeno of Elea when he divided a line segment by halving it, then halving each part, etc. This is a quantitative way of division, not leading to continuity but to density. Each part has a rational proportion to the original line segment. Another way of dividing a line is by intersecting it by one or more other lines. Now it is not difficult to imagine situations in which the proportion of two lines segments is irrational. (For instance, think of the diagonal of a square.) This spatial division shows the existence of points on the line that quantitative division cannot reach.
In 1892, Georg Cantor proved by his famous diagonal method that the set of real numbers is not denumerable. Cantor indicated the infinite amount of real numbers by the cardinal number C. He posed the problem of whether C equals aleph 1, the transfinite number succeeding aleph 0. At the end of the twentieth century, this problem was still unsolved. Maybe it is not solvable. Maybe it is an independent axiom.
A theorem states that each irrational number is the limit of an infinite sequence or series of rational numbers, e.g., an infinite decimal fraction. A sequence is an ordered set of numbers (a, b, c, …). Sometimes an infinite sequence has a limit, for instance, the sequence 1/2, 1/4, 1/8, … converges to 0. A series is the sum of a set of numbers (a+b+c+…). An infinite series too may have a limit. For instance, the series 1/2+1/4+1/8+… converges to 1.
This seems to prove that the set of real numbers can be reduced to the set of rational numbers, like the rational numbers are reducible to the natural ones, but that is arguable. Any procedure to find these limits cannot be done in a countable way, not consecutively. This would only lead to a denumerable (even if infinite) amount of real numbers. By multiplying a single irrational number like pi with all rational numbers, one finds already an infinite, even dense, yet denumerable subset of the set of real numbers. Also the introduction of real numbers by means of ‘Cauchy-sequences’ only results in a denumerable subset of real numbers. To arrive at the set of all real numbers requires a non-denumerable procedure. But then we would use a property of the real numbers (not shared by the rational numbers) to make this reduction possible. And this appears to result in circular reasoning, begging the question. (A Cauchy sequence, named after Augustin-Louis Cauchy, is a sequence whose elements become arbitrarily close to each other as the sequence progresses.)
Suppose we want to number the points on a straight or curved line, would the set of rational numbers be sufficient? Clearly not, because of the existence of spatial proportions like that between the diagonal and the side of a square, or between the circumference and the diameter of a circle. Conversely, is it possible to project the set of rational numbers on a straight line? The answer is positive, but then many holes are left. By plugging the holes, we get the real numbers, in the following empirical way. (This procedure differs from the standard treatment of real numbers.[12])
Consider a continuous line segment AB. We want to mark the position of each point by a number giving the distance to one of the ends. According to the axiom of Cantor-Dedekind, there is a one-to-one relation between the points on a line and the real numbers. These numbers include the set of infinite decimal fractions that Cantor proved to be non-denumerable. Hence, the set of points on AB is not denumerable. If we mark the point A by 0 and B by 1, each point of AB gets a number between 0 and 1. This is possible in many ways, but one of them is highly significant, because we can use the rational numbers to introduce a metric. We assign the number 0.5 to the point halfway between A and B, and analogously for each rational number between 0 and 1. (This is possible in a denumerable procedure). Now we define the real numbers between 0 and 1 to be the numbers corresponding one-to-one to the points on AB. These include the rational numbers between 0 and 1, as well as numbers like pi/4 and other limits of infinite sequences or series. The irrational numbers are surrounded by rational numbers (forming a dense set) providing the metric for the set of real numbers between 0 and 1.
A set is called continuous if its elements correspond one-to-one to the points on a line segment. It is not difficult to prove that the points on two different line segments correspond one-to-one to each other. On the one hand, the continuity of the set of real numbers anticipates the continuity of the set of points on a line. On the other hand, it allows of the possibility to project spatial relations on the quantitative relation frame.
The set of real numbers is continuous because it does not contain any holes, contrary to the dense set of rational numbers. The above mentioned procedures to divide a segment of a line, or to project the real numbers between 0 and 1 on a line segment, justify the following statement. Divide the ordered set of numbers into two subsets A and B, such that each element of A is smaller than each element of B. Then there is an element x of or of B, that is larger than all (other) elements of A and smaller than all (other) elements of B. This is called (Richard) Dedekind’s cut. The boundary element x can be rational or irrational. This means that the set of real numbers is complete with respect to the order of increasing magnitude, there are no holes left.
The set of real numbers constitutes the quantitative relation frame for spatial relations. Spatial measures like distance, length, area, and angle are projections on sets of numbers. To express spatial relations as magnitudes requires real numbers. Besides spatial relations, kinetic, physical, and chemical magnitudes are expressed in real numbers. This is remarkable, considering the practice of measuring. Each measurement is inaccurate to a certain extent. Therefore, a measurement never yields anything but a rational number. Moreover, computers rely on rational numbers. Hence, the use of real numbers has a theoretical background. The assumption that a magnitude is continuously variable is not empirically testable.
Encyclopaedia of relations and characters. 2. Sets
2.3. Groups as characters
Mathematics knows several structures that I consider quantitative characters. Among these, the character of mathematical groups expressing symmetries is of special interest to natural science.
A group is a set of elements that can be combined such that each pair generates a third element. In the world of numbers, such combinations are addition or multiplication. Because of the mutual coherence of the elements, a group may be considered an aggregate. The phenomenon of isomorphy allows of the projection of physical states of affairs on mathematical ones.
In 1831, Évariste Galois introduced the concept of a group in mathematics as a set of elements satisfying the following four axioms.
1. A combination procedure exists, such that each pair of elements A and B unambiguously generates a new element AB of the group.
2. The combination is associative, i.e., (AB)C = A(BC), to be written as ABC.
3. The group contains an element I, the identity element, such that for each element A of the group, AI = IA = A.
4. Each element A of the group has an inverse element A’, such that AA = AA’ = I.
A group is called Abelean (after N.H. Abel) or commutative if for each A and B, AB = BA. This is by no means always the case.
It can be proved that each group has only one identity element, that each element has only one inverse element, and that I’ = I. Each group has at least one element, I. (Hence, the zero set is not a group.) If a subset of the group is a group itself with the same combination rule, then both groups share the identity element.
It is clear that the elements of a group are mutually strongly connected. They have a relation determined by the group’s character, to be defined as AB’, the combination of A with the inverse of B. The relation of an element A to itself is AA’ = I, A is identical with itself. Moreover, (AB’)’ = BA’, the inverse of a relation of A to B is the relation of B to A.
Each group is complete. If we combine each element with one of them, A, the identity element I is converted into A, and the inverse of A becomes I. The new group as a whole has exactly the same elements as the original group. Hence, the combination of all elements with an element A is a transition of a group into itself. It expresses a symmetry, in which the relations between the elements are invariant.[13]
If two groups can be projected one-to-one onto each other, they are called isomorphic. Two groups are isomorphic if their elements can be paired such that A1B1 = C1 in the first group implies that A2B2 = C2 for the corresponding elements in the second group and conversely. This may be the case even if the combination rules in the two groups are different. The phenomenon of isomorphy means that the character of a group is not fully determined by the axioms alone. Besides the combination rule, at least some of the group’s elements must be specified, such that the other elements are found by applying the combination rule.
Isomorphy allows of the projection of one group onto the other one. It leads to the interlacement of various characters, as we shall see in the next few chapters. Hence, isomorphy is a tertiary property of groups, a disposition.
The elements of a group may be numbers, or number vectors, or functions of numbers, or operators transforming one function into another one. In physics, groups were first applied in relativity theory, and since 1925 in quantum physics and solid state physics. Not to everyone’s delight, however, see e.g. John Slater about the ‘Gruppenpest’: ‘… it was obvious that a great many other physicists were as disgusted as I had been with the group-theoretical approach to the problem.’[14]
Let us first cast a glance at some number groups.
The first examples of groups we find in sets of numbers. Adding or multiplying two numbers yields a third number. With respect to addition, 0 is the identity element, for a+0=0+a=a for any number a. Besides 0, it is sufficient to introduce the number 1 in order to generate the whole group of integral numbers: 1+1=2, 1+2=3, etc. The inverse of an integer a is –a, for a+(-a)=0. The relation of a and b is the difference a-b. Instead of beginning with 1, we could also start with 2 or with 3, generating the groups of even numbers, threefold numbers, etc. Each of these subgroups is complete and isomorphic with the full group of integers.
The rational, real, and complex numbers, too, each form a complete addition group, but the natural numbers do not constitute a group. The natural numbers form a class with a quantitatively qualified character, expressed by Giuseppe Peano’s axioms (2.1) or an alternative formulation. However, this character does not include the laws for subtraction and division, because the set of natural numbers is not complete with respect to these operations.
The mentioned groups are infinite, but there are finite groups of numbers as well. Two numbers are called ‘congruent modulo x’ if their difference is an integral multiple of x.The four integral numbers 0, 1, 2, and 3 form a group with the combination rule of ‘adding modulo 4’. If the sum of two elements would exceed 3, we subtract 4 (hence 3+2=1, and 4=0). If the difference would be less than 0 we add 4 (hence 2-3=3). This group is isomorphic to the rotation group representing the symmetry of a square. Likewise, the infinite but bounded set of real numbers between 0 and 2pi constituting the addition group modulo 2pi is isomorphic to the rotation group of a circle.
In the multiplication of numbers, 1 is the identity element. For each number a, 1.a = a.1 = a. The inverse of multiplication is division, 1/a being the inverse of a. The relation between a and b is their proportion a/b. Introducing the positive integers as elements, we generate the group of positive rational numbers. The full set of rational numbers is not a group with multiplication as a combination rule, because division by 0 is excluded, hence 0 would be an element without an inverse. Likewise, the set of positive real numbers is a multiplication group, but the set of all real numbers is not.
Addition and multiplication are connected by the distributive law: (a+b)c=ac+bc. Some addition and multiplication groups are combined into a structure called a field, having two combination rules. Three number fields with an infinite number of elements are known, respectively having the rational, real, and complex numbers as elements. Because division by zero is excepted, I do not consider a field a character, but an interlacement of two characters.
For a given positive real number a all numbers an form a multiplication group, if the variable exponent n is an element of the set of integral, rational, real, or complex numbers. The character of this group depends on the fact that the integral, rational, real, or complex numbers each form an addition group. The combination of two elements of the power group, the product of two powers, arises from the addition of the exponents: an.am=a(n+m). The identity element of this multiplication group is a0=1 and the inverse of an is a- n. The group is isomorphic with the addition group of integral, rational, real, or complex numbers.
Each addition group, multiplication group, and power group is a character class. Their characters are primarily numerically qualified. They have no secondary foundation, and their tertiary disposition is to be found in many interlacements with spatial, kinetic, physical, and chemical characters (chapters 3-5).
Sometimes, a variable spatial, kinetic, or physical property or relation turns out to have the character of a group, isomorphic to a group of numbers. If that magnitude may be positive as well as negative (e.g., electric charge) this is an addition group. If only positive values are allowed (e.g., length or mass), it is a multiplication group. In other cases, the property or relation is projected on a vector group (e.g., velocity or force). Isomorphy is not trivial. Sometimes one has to be content with a weaker projection, called homomorphy. An example is Friedrich Mohs’ scale, indicating the relative hardness of minerals by numbers between 0 en 10: if A is harder than B, A gets a higher numeral. It makes no sense to add or to multiply these ordinal numbers.
If a property or relation is isomorphic to a group of numbers, it is called measurable.[15] Since antiquity, its importance is expressed in the name geometry for the science of space. The law expressing the measurability of a property or relation is called its metric. Measurable magnitudes isomorphic to a number group allow us to perform calculations, which is the basis of the mathematization of science.
Measurability is not trivial. A physical magnitude is only measurable if a physical combination procedure is available, which can be projected on a quantitative one. To establish whether this is the case requires experimental and theoretical research.
Relativity theory demonstrates that a kinematic or physical combination rule in a group cannot always be projected on addition or multiplication. In the case of one-dimensional motion, the combination rule for two velocities v and w is not v+w (as in classical kinematics), but (v+w)/(1+vw/c2), where c equals the speed of light in a vacuum. For small velocities, the numerator is about 1, and the classical formula is approximately retained. The meaning of this formula becomes clear by taking v or w equal to c: if w=c, the combination of v and w equals c. A combination of velocities smaller than that of light never yields a velocity exceeding the speed of light. The formula also expresses the fact that the speed of light has the same value with respect to each moving system. In the Lorentz-group the speed of light is the unit of speed (c=1), having the same value in all inertial frames (3.3). This, of course, was the starting point for the formula’s derivation. The elements of the group are all velocities which magnitude is at most the speed of light.
Vectors play an important part in mathematics and in physics. With all kinds of vectors, like position, displacement, velocity, force, and electric field strength, the numerical vector character is interlaced. Spatial, kinetic and physical vectors are isomorphic with number vectors. Besides, mathematics acknowledges tensors, matrices and other structures.
A number vector r=(x,y,z,…) is an ordered set of n real numbers, called the components of the vector. Number vectors are subject to laws for addition and subtraction, by applying these operations to the components apart. For example, the difference between two vectors is r2r1=(x2x1,y2y1,z2z1, …). The set of all number vectors with the same number of components is an addition group, the zero vector 0=(0,0,0, …) being its identity element. Each vector multiplied by a real number yields a new vector within the group. If c is an ordinary number, b=ca=c(a1,a2,a3, …)=(ca1,ca2,ca3, …). However, division by zero being excluded, this does not define a combination procedure for a group.
Besides the zero vector as the identity element, the set contains unit vectors. In a unit vector, one component is equal to 1, the others are equal to 0. Any vector can be written as a linear combination of the unit vectors: for each number vector, a=(a1,a2,a3, …)=a1(1,0,0, …)+a2(0,1,0, …)+a3(0,0,1, …)+ ... The set of unit vectors constitutes the base of the set of vectors. For number vectors, the base is unique, but in other cases, a group of vectors may have various bases. For spatial vectors, e.g., each co-ordinate system represents another base. With the help of functions, other orthonormal bases for number vectors can be constructed, as is practised in quantum physics.
The scalar product of two number vectors can be used to determine relations between vectors. The scalar product of the vectors a and b is: a.b=a1b1+a2b2+a3b3+… The square root of the scalar product of a vector with itself (a.a=a12+a22+a32+ …) determines the magnitude of a. Each component of the vector a is equal to its scalar product with the corresponding unit vector, e.g., a1=a.(1,0,0…). Analogous to the spatial case, this is called the projection of a on a unit vector. If the scalar product is zero we call the vectors orthogonal, anticipating the spatial property of mutually perpendicular vectors. For instance, the unit vectors are mutually orthogonal. This multiplication of vectors is not a combination rule for groups, because the product is not a vector. (The ‘vector product’ is an anti-symmetric tensor, having n2 components, of which ½(n-1)n components are independent. In a three-dimensional space, this yields exactly three independent components. Hence a vector product looks like a vector (only in three dimensions). However, it is a ‘pseudovector’. At perpendicular reflection, a real vector reverses its direction, whereas the direction of a pseudovector is not changed.)
Apart from being real, the components of number vectors may be rational or complex, or even functions of numbers. These anticipate spatial vectors representing relative positions. An important difference is that spatial vectors are in need of a co-ordinate system, with an arbitrary choice of origin and unit vectors (3.1). Hence, number vectors are not identical with spatial vectors determining positions or displacements. A fortiori, this applies to kinetic or physical vectors, representing velocities or forces. Rather, the character of number vectors has the disposition to become interlaced with the characters of spatial, kinetic, or physical vectors.
A special case is the set of complex numbers, two-component vectors with a specific arithmetic. Also known as c=a+bi, a complex number c=(a, b) is a two-dimensional number vector having real components a and b. The complex numbers for which b=0 have the same properties as real numbers, hence for convenience one writes a=(a,0). This makes the set of real numbers a subset of the set of complex numbers. The unit vectors are 1=(1,0) and i=(0,1), the imaginary unit. The complex numbers form an addition group.
The vector c*=(a, -b) is called the complex conjugate of c=(a, b). The magnitude of c is the square root of cc*=(a, b)(a, -b)=a2+b2 and is a real number. The complex numbers form an addition group with the combination rule: (a, b)+(c, d)=(a+c,b+d). The identity element is 0=(0,0), and -(a,b) = (-a,-b) is the inverse of (a,b).
Complex numbers have the unique property that their multiplication yields a complex number. This is not the case for other number vectors. The product of the complex numbers (a,b) and (c,d) is (a,b).(c,d) = (ac-bd,bc+ad), which is a complex number. Clearly, i2=(0,1)(0,1)=(-1,0)=-1. The inverse operation also gives a complex number, but division by zero being excluded, this does not result in a group. As observed, the set of complex numbers is a field, an interlacement of two characters, subject to two combination rules.
Unlike the real numbers, the complex numbers cannot be projected on a line in an unambiguous order of increasing magnitude, because different complex numbers may have the same magnitude. However, they can be projected on a two-dimensional ‘complex plane’. The addition group of complex numbers is isomorphic to the addition group of two-dimensional spatial vectors.
If we call w the angle with the positive real axis (for which a>0, b=0), then a complex number having magnitude c can be written as an imaginary exponential function c.exp.iw=c(cos w+i sin w). The product of two complex numbers is now cd.exp.i(v+w) and their quotient is (c/d).exp.i(v-w). In the complex plane, the unit circle around the origin represents the set of numbers exp.iw. Multiplication of a complex number with exp.iw corresponds with a rotation about the angle w.
Interesting is that some theorems about real numbers can only be proved by considering them a subset of the set of complex numbers. The characters of real and complex numbers are strongly interlaced.
Mathematical functions may also have a character, a specific set of laws. A function is a prescription, connecting a set of numbers [x] onto another set [y], such that to every x only one y corresponds, y=f(x). A function may depend on several variables, e.g. the components of a vector. A function is a relation between the elements of two or more sets, e.g., number sets. This relation is not always symmetrical. With each element of the first set [x] corresponds only one element of the second set [y]. Conversely, each element of [y] corresponds with zero, one or more elements of [x]. If the functional relation between [x] and [y] is symmetrical, the function is called one-to-one. This is important in particular in the case of a projection of a set onto itself. Sometimes such a projection is called a rotation. In a picture in which [x] is represented on the horizontal axis and [y] on the vertical axis, a graph represents the function spatially.
If the set [x] is finite, then [y] is finite as well and the prescription may be a table. More interesting are functions for which [x] is a non-denumerable set of real or complex numbers within a certain interval. A function may be continuous or discontinuous. An example of a discontinuous function is the stepfunction: y=0 if x<a and y=1 if x>a. Here [x] is the set of all real numbers, and [y] is a subset of this set. The derivative of the step function is the characteristic delta function. The delta function equals zero for all values of x, except for x = a. For x = a, the delta function is not defined. The integral of the delta function is 1. An approximate representation of the deltafunction is a rectangle having height h and width 1/h. If h increases indefinitely, 1/h decreases, but the integral (the rectangle’s area) is and remains equal to 1. The well-known Gauss-function approximates the deltafunction equally well.
Many a characteristic function defined by a specific lawful connection between two sets [y] and [x] has the disposition to be interlaced with spatial, kinetic, or physical characters. For instance, the quadratic function y=ax2+bx+c is interlaced with the spatial character of a parabola and with motion in free fall. Spatially defined, a parabola is a conic section. Of course, it can also be defined as the projection of the mentioned quadratic function. Contrary to laws, definitions are not very important.
The exponential function has the disposition to become interlaced with periodic motions and various physical processes. The exponential function with a real exponent (exp.at) indicates positive or negative growth. If it has an imaginary exponent, the exponential function (exp.iat) is periodical (i.e., exp.iat=exp.i(at+n.2pi) for each integral value of n), hence its character is interlaced with those of periodic motions like rotations, oscillations and waves.
Besides the above mentioned number vectors, mathematics knows of vectors which components are functions. Now a vector is an ordered set of n functions. (The dimension n may be finite or infinite, denumerable or non-denumerable). This is only possible if the scalar product f.y is defined, including the magnitude of f (the square root of f.f), and if an orthonormal base of n unit functions f1, f2, … exists. ‘Orthonormal’ means that fi.fj=dij: the scalar product of each pair of basis functions equals 1 if i=j, it equals 0 if i differs from j. A function is an element of a complete addition group of functions if it is a linear combination of a set of basic functions. An n-dimensional linear combination of n basis functions is: f=c1f1+c2f2+ … +cn fn. In a complex function set, the components c1, c2, c3, … are complex as well. The number of dimensions may be finite, denumerable infinite, or non-denumerable. In the latter case, the sum is an integral.
The basic functions being orthonormal, the group of functions is isomorphic with the group of number vectors having the same number of dimensions.
A function projects the elements of a number set onto another number set. Because many functions exist, sets of functions can be constructed. These too may be projected on each other, and such a projection is called an operator. Although the idea of an operator is developed and mostly applied in quantum physics, it is a mathematical concept. An operator A converses a function into another one, y(x)=Af(x). This has the profile of an event. Having a quantitative character, a transition made by an operator is interlaced with the character of events qualified by a later relation frame. A spatial operation may be a translation or a rotation. A change of state is an example of a physical event. Quantum physics projects a physical change of state on the mathematical transition of a function by means of an operator.
If the converted function is proportional to the original one (Af=af, such that a is a real number), we call f an eigenfunction (proper function) of A, and a the corresponding eigenvalue (proper value). Trivial examples are the identity operator, for which any function is an eigenfunction (the eigenvalue being 1); or the operator multiplying a function by a real number (being its eigenvalue).
An operation playing an important part in kinematics, physics and chemistry is differentiating a function. (The reverse operation is called integrating). By differentiating a function is converted into its derivative. In mechanics, the derivative of the position function indicates the velocity of a moving body. Its acceleration is found by calculating the derivative of the velocity function.
For the operator (d/dx), the real exponential function f=b.exp.ax is an eigenfunction, for (d/dx)f=ab.exp.ax=af. The eigenvalue is the exponent a. The imaginary exponential function y=b.exp.iat is an eigenfunction of the operator (1/i)(d/dt), in quantum physics called the Hamilton-operator or Hamiltonian (after William Rowan Hamilton). Again, the eigenvalue is the exponent a. If ABf = BAf for each f, A and B are called commutative. If two operators commute, they have the same eigenfunctions, but usually different eigenvalues.
Quantum physics calls a linear set of functions with complex components a Hilbert space. The quantum mechanical state space is called after David Hilbert, but was invented by John von Neumann, in 1927. This group is a representation of the ensemble of possible states of a physical system.
Consider an operator projecting a group onto itself. The operator A converts an element f of the group into another element Af of the same group. Such an operator is called linear if for all elements of the group A(f+y)=Af+Ay. If its eigenfunctions constitute an orthonormal basis for the group or a subgroup, the operator is called hermitean, after the mathematician Charles Hermite. The operation represented by a hermitean operator H is not a combination procedure for a group, but it projects a function on the eigenfunctions of H.
Besides hermitean operators, quantum physics applies unitary operators, which form a group representing the symmetry properties of a Hilbert space. To each operator A, an operator A+ is conjugated such that the scalar product y. Af = A+f. y. For a Hermitean operator H, H+ = H, hence f.Hy = Hf.y. For a unitary operator U, UU+ = I, the identity operator. Hence, Uy.Uf=f.y=(y.f)*. This means that the probability of a state or a transition, being determined by a scalar product, is invariant under a unitary operation. Unitary operators are especially fit to describe symmetries and invariances.
Encyclopaedia of relations and characters. 2. Sets
2.4. Ensemble and probability
In our daily life as well as in science, we experience a thing first of all as a unit having specific properties. We know that an atom has the spatially founded character of a nucleus surrounded by a cloud of electrons. However, we also know it as a unit with a specific mass and chemical properties. A character determines a class of similar things. There are many hydrogen atoms having the same characteristic properties, even if deploying individual differences.
The arithmetic of characteristically equal individuals has a specific application in statistics. Statistics makes sense if it concerns the mutual variations of similar individuals. Statistics is only applicable to a specific set of individuals, a subset of a character class, a sample representative for the ensemble of possible variations. Both theoretically and empirically, we can apply statistics to the casting of dice, supposing all dice to have the same cubic symmetry, and assuming that the casting procedure is arbitrary.
I call an ensemble the set of all possible variations allowed by a character. Just like other sets, an ensemble has subsets, and sometimes the measure of a subset represents the relative probability of the possibilities. The concept of probability makes only sense if it concerns possibilities that can be realized by some physical interaction. Therefore, probability is a mathematical concept anticipating the physical one. I shall present a short summary of the classical theory of probability.
I discuss probability only in an ontological context, not in the epistemological meaning of the probability of a statement. Ontologically, probability does not refer to a lack of knowledge, but to the variation allowed by a character.
Consider the subsets A, B, … of the non-empty ensemble E of possibilities. Now A and B is the union of A and B, the subset of all elements belonging to A, to B, or to both. The intersection A or B is the subset of all elements belonging to A as well as to B. If A or B = 0 (the empty set) we call A and disjunct, they have no elements in common. If A is a subset of B, then A and B and A or A. Clearly, the intersection of A with E is A: A orA.
Formally, probability is defined as a quantitative measure p(A) for any subset A of E. Observe that the theory ascribes a probability to the subsets, not to the elements of a set.
1. Probability is a non-negative measure: p(A) is larger than or equal to 0.
2. Probability is normalized: p(E)=1.
3. Probability is an additive function for disjunct subsets of E: if A or = 0, then p(A and B)=p(A)+p(B).
Starting from this definition, several theorems can be derived.
The conditional probability, the chance having A if B is given and if p(B) > 0, is defined as p(A/B)=p(A or B)/p(B). Because p(A)=p(A/E), each probability is conditional. If A and B exclude each other, being disjunct (A or = 0), the conditional probability is zero. Now p(A/B)=p(B/A)=0. If A is a subset of B then: p(A and B) = p(B); p(A or B) = p(A); p(A/B) = p(A)/p(B).
A and B are called statistically independent if p(A/B)=p(A) and p(B/A)=p(B). Then p(A or B)=p(A)p(B) – for statistically independent subsets the chance of the combination is the product of their separate chances. Mark the distinction between disjunct and statistically independent subsets. In the first case probabilities are added, in the second case multiplied.
If an ensemble consists of n mutually statistically independent subsets, it can be projected onto an n-dimensional space. For instance, the possible outcomes of casting two dice simultaneously are represented on a 6x6 diagram. Genetics calls this a Punnett-square, after Reginald Punnett (1905). If E is a square with unit area, p(A) is the area of a part of the square. Hence, so far the theory is not intrinsically a probability theory.
Finally, consider a set of disjunct subsets X of E, such that their sum equals E. Now the probability p(X) is a function over the subsets X of E. We call p(X) the probability distribution over the subsets X of E. Consider an arbitrary function y(X) defined on this set. The average value of the function, also called its expectation value, is the sum over all X of the product y(X)p(X), if the number of disjunct subsets is denumerable (otherwise it is the integral). In the form of a formula: <y(X)>=SE y(X)p(X). In this sum, the probability expresses the ‘weight’ of each subset X.
This is called the ensemble average of the property. In statistical mechanics, it is an interesting question of whether this average is equal to the time average for the same property for a single system during a long time interval. This so-called ergodic problem is only solved for some very special cases, sometimes with a positive, sometimes with a negative result.[16] Besides the average of a property, it is often important to know how sharply peaked its probability distribution is. The ‘standard deviation’, the average difference from the average, is a measure of this peak. This is defined as <y(X)-<y(X)>>.
The formal theory is applicable to specific cases only if the value p(A) can be theoretically or empirically established for the subsets A of E. Often this is only a posteriori possible by performing measurements with the help of a representative sample. Sometimes, symmetries allow of postulating an a priori probability distribution. Games of chance are the simplest, oldest, and best-known examples.
Although the above-summarized theory is not only relatively simple but almost universally valid as well, its application strongly depends on the situation. Quantum physics allows of interference of states, influencing probability in a way excluded by classical probability theory (4.3). With respect to thing-like characters, the laws constituting the character determine the probability of possible variations. Another important field of application is formed by aggregates, for instance studied by statistical mechanics. For systems in or near equilibrium impressive results have been achieved, but for non-equilibrium situations (hence, for events and processes), the application of probability turns out to be fraught with problems.
Based on the characteristic similarity of the individuals concerned, statistical research is of eminent importance in all sciences. It is a means to research the character of individuals whose similarity is recognized or conjectured. It is also a means to study the properties of a homogeneous aggregate containing a multitude of individuals of the same character.
As early as 1860, James Clerk Maxwell applied statistics to an ideal gas, consisting of N molecules, each having mass m, in a container with volume V.[17] He neglected the molecules’ dimensions and mutual interactions. The vector r gives the position of a molecule, and the vector v represents its velocity. Maxwell assumed the probability for positions, p1(r), to be independent of the probability for velocities, p2(v). This means: p(r,v)=p1(r)p2(v).
In equilibrium, the molecules are uniformly distributed over the available volume, hence the chance to find a molecule in a volume-element dr=dx.dy.dz equals p1(r)dr=1/Vdr. Observe that p1(r) as well as p2(v) is a probability density. Maxwell based the velocity distribution on two kinds of symmetry. First, he assumed that the direction of motion is isotropic. This means that p2(v) only depends on the magnitude of the molecular speed. In this case, mathematically it does not matter to replace the speed by its square, hence p2(v)=p2(vx2+vy2+vz2). Secondly, Maxwell assumed that the components of the velocity (vx,vy,vz) are statistically independent. Only the exponential function satisfies these two requirements: p2(v)=p2(vx2+vy2+vz2)=px(vx)py(vy)pz(vz)=a.exp.-½mb(vx2+vy2+vz2).
By calculating the pressure P exerted by the molecules on the walls of the container, and comparing the result with the law of Robert Boyle and Louis Gay-Lussac, Maxwell found that the exponent depends on temperature. From the law of Boyle and Gay-Lussac (PV=NkT, wherein T is the temperature and k is called Boltzmann’s constant), it follows that b=N/PV=1/kT. The value of a follows from normalisation, i.e., the requirement that the total probability equals 1. Only in the twentieth century, experiments confirmed Maxwell’s theoretical distribution function. The expression ½m(vx2+vy2+vz2) is recognizable as the kinetic energy of a molecule. The mean kinetic energy turns out to be equal to (3/2)kT. For all molecules together the energy is (3/2)NkT, hence, the specific heat is (3/2)Nk. This result was disputed in Maxwell’s days, but it was later experimentally confirmed for mono-atomic gases. When Maxwell published his theory, it was not generally accepted that most known gases (hydrogen, oxygen, or nitrogen) consist of bi-atomic molecules.[18] These gases have a different specific heat than mono-atomic gases like mercury vapour and the later discovered noble gases like helium and argon. Boltzmann explained this difference by observing that bi-atomic molecules have rotation and vibration kinetic energy besides translational kinetic energy. An exact explanation became available only after the development of quantum physics.
Ludwig Boltzmann generalized Maxwell’s distribution, by allowing other forms of energy besides kinetic energy. The Maxwell-Boltzmann distribution p(r,v)=p1(r)p2(v)=(a/V).exp.-E/kT turns out to be widely valid. The probabilities or relative occupation numbers of two-atomic, molecular, or nuclear states having energies E1 and E2 have a proportion according to the so-called Boltzmann-factor, determined by the difference between E1 and E2. The Boltzmann-factor is: p(E1)/p(E2)=(exp.-E1/kT)/(exp.-E2/kT)=exp.-(E1-E2)/kT. This means that a state having a high energy has a low probability.
The weakness of Maxwell’s theory was neglecting the mutual interaction of the molecules, for without interaction equilibrium cannot be reached. Boltzmann corrected this by assuming that the molecules collide continuously with each other, exchanging energy. He arrived at the same result.
Maxwell and Boltzmann considered one system consisting of a large number of molecules, whereas Josiah Gibbs studied an ensemble of a large number of similar systems. Assuming that all microstates are equally probable, the probability of a macrostate can be calculated by determining the number of corresponding microstates. The logarithm of this number is proportional to the entropy of a macrostate.
Clausius and Boltzmann aimed at reducing the irreversibility expressed by the second law of thermodynamics to the reversible laws of mechanics. In how far they succeeded is still a matter of dispute. Anyhow, it could not be done without taking recourse to probability laws.[19] Boltzmann demonstrated the equilibrium state of a gas to have a much larger probability than a non-equilibrium state. He assumed that any system moves from a state with a low probability to a state with a larger one as a matter of course. This means that the irreversibility of the realization of possibilities is presupposed. In quantum mechanics, the combination of reversible equations of motion with probability leads to irreversible processes as well.[20]
Both in classical and in quantum statistics a character as a set of laws determines the ensemble of possibilities and the distribution of probabilities. It allows of individuality, the subject side of a character. Positivist philosophers defined probability as the limit of a frequency in an unlimited sequence of individual cases.[21] Later Karl Popper defended the ‘propensity-interpretation’ of probability: we have to ‘… interpret these weights of the possibilities (or of the possible cases) as measures of the propensity, or tendency, of a possibility to realize itself upon repetition’.[22] A propensity is a physical disposition or tendency ‘… to bring about the possible state of affairs … to realize what is possible ... the relative strength of a tendency or propensity of this kind expresses itself in the relative frequency with which it succeeds in realizing the possibility in question.’[23] Besides subjectivist views, the frequency interpretation and the propensity interpretation, Lawrence Sklar calls ‘probability’ a theoretical term’[24] ‘… the meaning of probability attributions would be the rules of interference that take us upward from assertions about observed frequencies and proportions to assertions of probabilities over kinds in the world, and downward from such assertions about probabilities to expectations about frequencies and proportions in observed samples. These rules of “inverse” and “direct” inference are the fundamental components of theories of statistical inference.’[25] This comes close to my interpretation of probability determined by a character.
Positivists tried to reduce the concept of probability to the subject side. Of course, the empirical measurement of a probability often has the form of a frequency determination. Each law statement demands testing, and that is only possible by taking a sample. This hypothesis must be regarded ‘as a postulate which can be ultimately justified only by the correspondence between the conclusions which it permits and the regularities in the behaviour of actual systems which are empirically found.’[26] This applies to all suppositions founding calculations of probabilities, but this does not justify the elimination of the law-side from probability theory.
An example of a frequency definition of probability is found in the study of radioactivity. A radioactive atom decays independent of other atoms, even if they belong to the same sample. During the course of time, the initial number of radioactive atoms (No) in a sample decreases exponentially to Nt at time t. When at a time to, No radioactive atoms of the same kind are left in a sample, then the expected number of remaining atoms at time t equals: Nt=No exp.-(t-to)/t*, such that Nt/No=exp.-(t-to)/t*. The characteristic constant t* is proportional to the well-known half-life time. The law of decay is theoretically derivable from quantum field theory. This results in a slight deviation from the exponential function, too small to be experimentally verifiable,[27] Many scientists are content with this practical definition. However, a sample is a collection limited in time and space, it is not an ensemble of possibilities.
There are two limiting cases. In the one case, we extend the phenomenon of radioactivity to all similar atoms, increasing Noand Nt infinitely in order to get a theoretical ensemble. The ensemble has two possibilities, the initial state and the final state, and their distribution in the ensemble at time t after to can be calculated, namely as the proportion exp.-(t-to)/t* = [exp.-t/t*]/[exp.-to/t*]. In the other limiting case we take No=1. Now exp.-(t-to)/t* is the chance that a single atom decays after t-to sec. This quotient depends on a time difference, not on a temporal instant. As long as the atom remains in its initial state, the probability of decay to the final state is unchanged.
Both limiting cases are theoretical. An ensemble is no more experimentally determinable than an individual chance. Only a collection of atoms can be subjected to experimental research. It makes no sense to consider one limiting case to be more fundamental than the other one. The first case concerns the law side, the second case the subject-side of the same phenomenon of radioactivity.
Statistics is not only applicable for the investigation of the ensemble of possibilities of a character. If two characters are interlaced, their ensembles are related as well. Sometimes, a one-to-one relation between the elements of both ensembles exists. Now the realization of a possibility in one ensemble reduces the number of possibilities in the other ensemble to one. In other cases, several possibilities remain, with different probabilities.
Character interlacements are not always obvious. In a complex system, it is seldom easy to establish relations between structures, events and processes. Statistical research of correlations is a much applied expedient.
[1] Zermelo in 1908, quoted by Quine 1963, 4; Putnam 1975, chapter 2.
[2] Shapiro 1997, 98.
[3] Quine 1963, 107-116.
[4] Gödel 1962.
[5] Putnam 1975, xi; Shapiro 1997, 109-112; Brown 1999, 182-191.
[6] Barrow 1992, 137.
[7] Amundson 1994, 102-106.
[8] Brown 1999, 19, 22-23.
[9] Quine 1963, 101.
[10] Cassirer 1910, 49.
[11] Grünbaum 1968, 13.
[12] Quine 1963, chapter VI.
[13] The relation between the elements CA and BA is (CA)(BA)’ = (CA)(AB’) = CB’, the relation between C and B.
[14] Slater 1975, 60-62.
[15] Stafleu 1980, chapter 3.
[16] Tolman 1938, 65-70; Khinchin 1949, Ch. III; Reichenbach 1956, 78-81; Prigogine 1980, 33-42, 64-65; Sklar 1993, 164-194.
[17] Maxwell 1890, I, 377-409; Born 1949, 50ff; Achinstein 1991, 171-206.
[18] Stafleu 2019, 10.3.
[19] Bellone 1980, 91.
[20] Belinfante 1975, chapter 2.
[21] Von Mises 1939, 163-176, Reichenbach 1956, 96ff, Hempel 1965, 387, and initially Popper 1959, chapter VIII.
[22] Popper 1967, 32.
[23] Popper 1983, 286. See Settle 1974 discussing Popper’s views; Margenau 1950, chapter 13; Nagel 1939, 23; Sklar 1993, 90-127.
[24] Sklar 1993,102-108
[25] Sklar 1993, 103
[26] Tolman 1938, 59.
[27] Cartwright 1983, 118.
Encyclopaedia of relations and characters,
their evolution and history
Chapter 3
3.1. Spatial magnitudes and vectors
3.2. Character, transformation and symmetry of spatial figures
3.3. Non-Euclidean space-time in the theory of relativity
Encyclopaedia of relations and characters. 3. Symmetry
3.1. Spatial magnitudes and vectors
The second relation frame for characters concerns their spatial relations. In 1899, David Hilbert formulated his foundations of projective geometry as relations between points, straight lines and planes, without defining these. ‘Projective geometry’ is since the beginning of the nineteenth century developed as a generalization of Euclidean geometry. Gottlob Frege thought that Hilbert referred to known subjects, but Hilbert denied this. He was only concerned with the relations between things, leaving aside their nature. According to Paul Bernays, geometry is not concerned with the nature of things, but with ‘a system of conditions for what might be called a relational structure’.[1] Inevitably, the later emphasis on structures was influenced by structuralism, for instance by Nicolas Bourbaki, pseudonym for a group of mainly French mathematicians.[2]
Topological, projective, and affine geometries are no more metric than the theory of graphs. (A ‘graph’ is a two- or more-dimensional discrete set of points connected by line stretches.) This deals with spatial relations without considering the quantitative relation frame. I shall not discuss these non-metric geometries. The nineteenth- and twentieth-century views about metric spaces and mathematical structures turn out to be very important to modern physics.
This chapter is mainly concerned with the possibility to project a relation frame on a preceding one, and its relevance to characters. Section 3.1 discusses spatial magnitudes and vectors. The metric of space, being the law for the spatial relation frame, turns out to rest on symmetry properties. Symmetry plays an important part in the character and transformation of spatial figures that are the subject matter of section 3.2. Finally, section 3.3 deals with the metric of non-Euclidean kinetic space-time according to the theory of relativity.
Mathematics studies inter alia spatially qualified characters. Because these are interlaced with kinetic, physical, or biotic characters, spatial characters are equally important to science. This also applies to spatial relations concerning the position and posture of one figure with respect to another one. A characteristic point, like the centre of a circle or a triangle, represents the position of a figure objectively. The distance between these characteristic points objectifies the relative position of the circle and the triangle. It remains to stipulate the posture of the circle and the triangle, for instance with respect to the line connecting the two characteristic points. A co-ordinate system is an expedient to establish spatial positions by means of numbers.
Spatial relations are rendered quantitatively by means of magnitudes like distance, length, area, volume, and angle. These objective properties of spatial subjects and their relations refer directly (as a subject) to numerical laws and indirectly (as an object) to spatial laws.
Science and technology prefer to define magnitudes that satisfy quantitative laws.
This is not the case with all applications of numbers. Numbers of houses project a spatial order on a numerical one, but hardly allow of calculations. Lacking a metric, neither Friedrich Mohs’ scale for the hardness of minerals, nor Charles Richter’s scale for the strenghth of earthquakes leads to meaningful calculations.
If we want to make calculations with a spatial magnitude, we have to project it on a suitable set of numbers (integral, rational, or real), such that spatial operations are isomorphic to arithmetical operations like addition or multiplication. This is only possible if a metric is available, a law to find magnitudes and their combinations.
For many magnitudes, the isomorphic projection on a group turns out to be possible. For magnitudes having only positive values (e.g., length, area, or volume), a multiplication group is suitable. For magnitudes having both positive and negative values (e.g., position), a combined addition and multiplication group is feasible. For a continuously variable magnitude, this concerns a group of real numbers. For a digital magnitude like electric charge, the addition group of integers may be preferred. It would express the fact that charge is an integral multiple of the electron’s charge, functioning as a unit.
Every metric needs an arbitrarily chosen unit. Each magnitude has its own metric, but various metrics are interconnected. The metrics for area and volume are reducible to the metric for length. The metric for speed is composed from the metrics of length and time. Connected metrics form a metric system.
If a metric system is available, the government or the scientific community may decide to prescribe a metric to become a norm, for the benefit of technology, traffic, and commerce. Processing and communicating of experimental and theoretical results requires the use of a metric system.
A point has no dimensions and could have been considered a spatial object if extension were essential for spatial subjects. However, a relation frame is not characterized by any essence like continuous extension, but by laws for relations. Two points are spatially related by having a relative distance. The argument ‘a point has no extension, hence it is not a subject’ reminds of Aristotle and his adherents. They abhorred nothingness, including the vacuum and the number zero as a natural number. Roman numerals do not include a zero, and Europeans did not recognize it until the end of the Middle Ages. Galileo Galilei taught his Aristotelian contemporaries that there is no fundamental difference between a state of rest (the speed equals zero) and a state of motion (the speed is not zero).[3]
It is correct that the property length does not apply to a point, any more than area can be ascribed to a line, or volume to a triangle. The difference between two line segments is a segment having a certain length. The difference between two equal segments is a segment with zero length, but a zero segment is not a point. A line is a set having points as its elements, and each segment of the line is a subset. A subset with zero elements or only one element is still a subset, not an element. A segment has length, being zero if the segment contains only one point. A point has no length, not even zero length. Dimensionality implies that a part of a spatial figure has the same dimension as the figure itself. A three-dimensional figure has only three-dimensional parts. We can neither divide a line into points, nor a circle into its diameters. A spatial relation of a whole and its parts is not a subject-object relation, but a subject-subject relation. In a quantitative sense a triangle as well as a line segment is a set of points, and the side of a triangle is a subset of the triangle. But in a spatial sense, the side is not a part of the triangle.
Whether a point is a subject or an object depends on the nomic (nomos is Greek for law) context, on the laws we are considering. The relative position of the ends of a line segment determines in one context a subject-subject relation (to wit, the distance between two points), in another context a subject-object relation (the objective length of the segment). Likewise, the sides of a triangle, having length but not area, determine subjectively the triangle’s circumference, and objectively its area.
The sequence of numbers can be projected on a line, ordering its points numerically. To order all points on a line or line segment the natural, integral, or even rational numbers are not sufficient. It requires the complete set of real numbers (2.2). The spatial order of equivalence or co-existence presents itself to full advantage only in a more-dimensional space. In a three-dimensional space, all points in a plane perpendicular to the x-axis correspond simultaneously to a single point on that axis. With respect to the numerical order on the x-axis, these points are equivalent. To lay down the position of a point completely, we need several numbers (x,y,z,…) simultaneously, as many as the number of dimensions. Such an ordered set of numbers constitutes a number vector (2.3).
For the character of a spatial figure too, the number of dimensions is a dominant characteristic. The number of dimensions belongs to the laws constituting the character. A plane figure has length and width. A three-dimensional figure has length, width, and height as mutually independent measures. The character of a two-dimensional figure like a triangle may be interlaced with the character of a three-dimensional figure like a tetrahedron. Hence, dimensionality leads to a hierarchy of spatial figures. At the base of the hierarchy, we find one-dimensional spatial vectors.
Contrary to a number vector, a spatial vector is localized and oriented in a metrical space. Localization and orientation are spatial concepts, irreducible to numerical ones. A spatial vector marks the relative position of two points. By means of vectors, each point is connected to all other points in space. Vectors having one point in common form an addition group. After the choice of a unit of length, this group is isomorphic to the group of number vectors having the same dimension. Besides spatial addition, a scalar product is defined (2.3). In an Euclidean space, the scalar product of two vectors a and b equals a.b=ab cos α. Herein a is the square root of a.a. It equals the length of a whereas α is the angle between a and b. If two vectors are perpendicular to each other, their scalar product is zero. The group’s identity element is the vector with zero length. Its base is a set of orthonormal vectors, i.e., the mutually perpendicular unit vectors having a common origin. Each vector starting from that origin is a linear combination of the unit vectors. So far, there is not much difference with the number vectors.
However, whereas the base of a group of number vectors is rather unique, in a group of spatial vectors the base can be chosen arbitrarily. For instance, one can rotate a spatial base about the origin. It is both localized and oriented. The set of all bases with a common origin is a rotation group. The set of all bases having the same orientation but different origins is a translation group. It is isomorphic both to the addition group of spatial vectors having the same origin and to the addition group of number vectors.
Euclidean space is homogeneous (similar at all positions) and isotropic (similar in all directions). Combining spatial translations, rotations, reflections with respect to a line or a plane and inversions with respect to a point leads to the Euclidean group. It reflects the symmetry of Euclidean space. Symmetry points to a transformation keeping certain relations invariant.[4] At each operation of the Euclidean group, several quantities and relations remain invariant, for instance, the distance between two points, the angle between two lines, the shape and the area of a triangle, and the scalar product of two vectors.
Besides a relative position, a spatial vector represents a displacement, the result of a motion. This is a disposition, a tertiary characteristic of spatial vectors.
Each base in each point of space defines a co-ordinate system. In an Euclidean space, this is usually a Cartesian system of mutually perpendicular axes (named after, respectively, Euclid of Alexandria and René Descartes). Partly, the choice of the co-ordinate system is arbitrary. We are free to choose rectangular, oblique or polar axes. (Polar co-ordinates do not determine the position of a point by its projections on two or more axes, but by the distance r to the origin and by one or more angles. For example, think of the geographical determination of positions on the surface of the earth.)
If we have a reference system, we can replace it by translation, rotation, mirroring, or a combination of these. A co-ordinate system has to satisfy certain rules.
1. The number of axes and unit vectors equals the number of dimensions. With fewer co-ordinates, the system is underdetermined, with more it is overdetermined.
2. The unit vectors are mutually independent. Two vectors are mutually dependent if they have the same direction. An arbitrary vector is a linear combination of the unit vectors, and is said to depend on them. In two dimensions, a=(a1,a2)=a1(1,0)+a2(0,1).
3. Replacing a co-ordinate system should not affect the spatial relations between the subjects in the space. In particular the distance between two points should have the same value in all co-ordinate systems. This rule warrants the objectivity of the co-ordinate systems. In a co-ordinate transformation, a magnitude that remains equal to itself is called ‘invariant’. This applies e.g. to the magnitude of a vector and the angle between two vectors. ‘Covariant’ magnitudes change in analogy to the co-ordinates.
4. The choice of a unit of length is arbitrary, but should have the same value in all co-ordinate systems, as well as along all co-ordinate axes. That may seem obvious, but for a long time at sea, the units used for depth and height were different from those for horizontal dimensions and distances.
5. For calculating the distance between two points we need a law, called the spatial metric, see below.
6. The co-ordinate system should reflect the symmetry of the space. For an Euclidean space, a Cartesian co-ordinate system satisfies this requirement. Giving preference to one point, e.g. the source of an electric field, breaks the Euclidean symmetry. In that case, scientists often prefer a co-ordinate system that expresses the spherical symmetry of the field. In the presence of a homogeneous gravitational field, physicists usually choose one of the co-ordinate axes in the direction of the field. If the space is non-Euclidean, like the earth’s surface, a Cartesian co-ordinate system is quite useless.
The fact that we are free to choose a co-ordinate system has generated the assumption that this choice rests on a convention, an agreement to keep life simple.[5] However, both the fact that a group of co-ordinate systems reflects the symmetry of the space and the requirement of objectivity make clear that these rules are normative. It is not imperative to follow these rules, but we ought to choose a system that reflects spatial relations objectively.
The metric depends on the symmetry of space. In an Euclidean space, Pythagoras’ law determines the metric. If the co-ordinates of two points are given by (x1,y1,z1) and (x2,y2,z2), and if we call dx=x2x1 etc., then the distance is the square root of dr2=dx2+dy2+dz2. This is the Euclidean metric. Since the beginning of the nineteenth century, mathematics acknowledges non-Euclidean spaces as well. Non-Euclidean geometries were discovered independently by Nicolai Lobachevski (first publication, 1829-30), Janos Bolyai and Carl Friedrich Gauss, later supplemented by Felix Klein. Significant is to omit Euclides’ fifth postulate, corresponding to the axiom that one and only one line parallel to a given line can be drawn through a point outside that line. (Long before, it was known that on a sphere the Euclidean metric is only applicable to distances small compared with the sphere's radius.)
Preceded by Gauss, in 1854 Bernhard Riemann formulated the general metric for an infinitesimal small distance in a multidimensional space. Riemann’s metric is dr2=gxxdx2+gyydy2+gxydxdy+gyxdydx+… Mark the occurrence of mixed terms besides quadratic terms. In the Euclidean metric gxx=gyy=1, gxy=gyx=0, and Δx and Δy are not necessarily infinitesimal. According to Riemann, a multiply extended magnitude allows of various metric relations, meaning that the theorems of geometry cannot be reduced to quantitative ones.[6]
For a non-Euclidean space, the co-efficients in the metric depend on the position. If i and j indicate x or y, the gij’s, are components of a tensor. In the two-dimensional case gij is a second derivative (like d2r/dxdy). For a more-dimensional space it is a partial derivative, meaning that other variables remain constant. To calculate a finite displacement requires the application of integral calculus. The result depends on the choice of the path of integration. The distance between two points is the smallest value of these paths. On the surface of a sphere, the distance between two points corresponds to the path along a circle whose centre coincides with the centre of the sphere.
The metric is determined by the structure and eventually the symmetry of the space. This space has the disposition to be interlaced with the character of kinetic space or with the physical character of a field. A well-known example is the general theory of relativity, being the relativistic theory of the gravitational field. In the general theory of relativity, the co-efficients for the four-dimensional space-time manifold form a symmetrical tensor, i.e., gij=gji for each combination of i and j. Hence, among the sixteen components of the tensor ten are independent.
An electromagnetic field is also described by a tensor having sixteen components. Its symmetry demands that gij=-gji for each combination of i and j, hence the components of the quadratic terms are zero. This leaves six independent components, three for the electric vector and three for the magnetic pseudovector.
Gravity having a different symmetry than electromagnetism is related to the fact that mass is definitely positive and that gravity is an attractive force. In contrast, electric charge can be positive or negative and the electric force named after Charles-Augustin Coulomb may be attractive or repulsive. A positive charge attracts a negative one, two positive charges (as well as two negative charges) repel each other.
In general, a non-Euclidean space is less symmetrical than an Euclidean one having the same number of dimensions. Motion as well as physical interaction causes a break of symmetry in spatial relations.
Encyclopaedia of relations and characters. 3. Symmetry
3.2. Character, transformation and symmetry
of spatial figures
This section discusses the shape of a spatial figure as an elementary example of a character. A spatial character has both a primary and a secondary characteristic. The tertiary characteristic plays an increasingly complex part in the path of a specific motion, the shape of a crystal, the morphology of a plant or the body structure of an animal. Besides, even the simplest figures display a spatial interlacement of their characters.
A spatial figure has the profile of a thing-like subject. Its shape determines its character. Consider a simple plane triangle in an Euclidean space. In a non-Euclidean space two figures only have the same shape if they have the same magnitude as well.[7] Similarity (to be distinguished from congruence or displacement symmetry) is a characteristic of an Euclidean space. Many regular figures like squares or cubes only exist in an Euclidean space. The character of a triangle constitutes a set of widely different triangles, having different angles, linear dimensions, and relative positions. Because each triangle belonging to the character class is a possible triangle as well, the ensemble coincides with the character class. We distinguish this set easily from related sets of e.g., squares, ellipses, or pyramids. Clearly, the triangle’s character is primarily spatially characterized and secondarily quantitatively founded. Thirdly, a triangle has the disposition to have an objective function in a three- or more-dimensional figure.
A triangle is a two-dimensional spatial thing, directly subject to spatial laws. The triangle is bounded by its sides and angular points, which have no two-dimensional extension but determine the triangle’s objective magnitude. Quantitatively, we determine the triangle by the number of its angular points and sides, the magnitude of its angles, the length of its sides, and by its area.
With respect to the character of a triangle, its sides and angular points are objects, even if they are in another context subjects (3.1). Their character has the disposition to become interlaced with that of the triangle.
A triangle has a structure or character because its objective measures are bound, satisfying restricting laws or constraints. Partly this is a matter of definition, a triangle having three sides and three angular points. This definition is not entirely free, for a ‘biangle’ as a two-dimensional figure does not exist and a quadrangle may be considered a combination of two triangles. However, there are other lawlike relations not implied by the definition, for instance the law that the sum of the three triangles equals π, the sum of two right angles. This is a specific law, only valid for plane triangles.
A triangle is a whole with parts. As observed, the relation of a whole and its parts is not to be confused with a subject-object relation. It makes no sense to consider the sides and the angular points as parts of the triangle. With respect to a triangle, the whole-part relation has no structural meaning. In contrast, a polygon is a combination of triangles being parts of the polygon. Therefore, a polygon has not much more structure than it derives from its component triangles. The law that the sum of the angles of a polygon having n sides equals (n-2)π is reducible to the corresponding law for triangles.
Two individual triangles can be distinguished in three ways, by their relative position, their relative magnitude, and their different shape. I shall consider two mirrored triangles to be alike.
Relative position is not relevant for the character of a triangle. We could just as well consider its relative position with respect to a circle or to a point as to another triangle. Relative position is the universal spatial subject-subject relation. It allows of the identification of any individual subject. Often, the position of a triangle will be objectified, e.g. by specifying the positions of the angular points with respect to a co-ordinate system.
Next, triangles having the same shape can be distinguished by their magnitude. This leads to the secondary variation in the quantitative foundation of the character.
Finally, two triangles may have different shapes, one being equilateral, the other rectangular, for example. This leads to the primary variation in the spatial qualification of the triangle’s character. Triangles are spatially similar if they have equal angles. Their corresponding sides have an equal ratio, being proportional to the sinuses of the opposite angles.
For any polygon, the triangle can be considered the primitive form. It displays a primary spatial variability in its shape and a secondary quantitative variability in its magnitude. Another primitive form is the ellipse, with the circle as a specific variation.
There are irregular shapes as well, not subject to a specific law. These forms have a secondary variability in their quantitative foundation, but lack a lawlike primary variation regarding the qualifying relation frame.
Like two triangles can be different in three respects, a triangle can be changed in three ways: by displacement (translation, rotation, and/or mirroring); by making it larger or smaller; or by changing its shape, i.e., by transformation. A transformation means that the triangle becomes a triangle with different angles or it gets an entirely different shape. Displacement, enlargement or diminishment, and transformation are spatial expressions anticipating actual events.
An operator (2.3) describes a characteristic transformation, if co-ordinates and functions represent the position and the shape of the figure. The character of a spatial transformation preserving the shape of the figure is interlaced with the character of an operator having eigenfunctions and eigenvalues.
All displacements of a triangle in a plane form a group isomorphic to the addition group of two-dimensional vectors. All rotations, reflections and their combinations constitute groups as well. Enlargements of a given triangle form a group isomorphic to the multiplication group of positive real numbers. (A subgroup is isomorphic to the multiplication group of positive rational numbers).
A separate class of spatial figures is called symmetric, e.g., equilateral and isosceles triangles. Symmetry is a property related to a spatial transformation such that the figure remains the same in various respects. Without changing, an equilateral triangle can be reflected in three ways and rotated about two angles. An isosceles triangle has only one similar operation, reflection, and is therefore less symmetric. A circle is very symmetric, because an infinite number of rotations and reflections transform it into itself.
The theory of groups renders good services to the study of these symmetries (2.3). In 1872, Felix Klein in his ‘Erlangen Program’ pointed out the relevance of the theory of groups for geometry, considered to be the study of properties invariant under transformations.[8]
Consider the group consisting of only three elements, I, A and B, such that AB=I, AA=B, BB=A. This is very abstract and only becomes transparent if an interpretation of the elements is given. This could be the rotation symmetry of an equilateral triangle, A being an anti-clockwise rotation of π/3, B of 2π/3. The inverse is the same rotation clockwise. The combination AB is the rotation B followed by A, giving I, the identity. Clearly, the character of this group has the disposition of being interlaced with the character of the equilateral triangle. However, this triangle has more symmetry, such as reflections with respect to a perpendicular. This yields three more elements for the symmetry group, now consisting of six elements. The rotation group I, A, B is a subgroup, isomorphic to the group consisting of the numbers 0, 1 and 2 added modulo 3 (2.3). The group is not only interlaced with the character of an equilateral triangle, but with many other spatial figures having a threefold symmetry, as well as with the group of permutations of three objects. A permutation is a change in the order of a sequence; e.g., BAC is a permutation of ABC. A set of n objects allows of n! = 1.2.3…. n permutations.
In turn, the character of an equilateral triangle is interlaced with that of a regular tetrahedron. The symmetry group of this triangle is a subgroup of the symmetry group of the tetrahedron.
A group expresses spatial similarity as well. The combination procedure consists of the multiplication of all linear dimensions with the same positive real or rational number, leaving the shape invariant. The numerical multiplication group of either rational or real positive numbers is interlaced with a spatial multiplication group concerning the secondary foundation of figures.
The translation operator, representing a displacement by a vector, formally represented by T(a)r=r+a, is an element of various groups, e.g., the Euclidean group mentioned before. Solid-state physics applies translation groups to describe the regularity of crystals. This implies an interlacement of the quantitative character of a group with the spatial character of a lattice and with the physical character of a crystal. The translation group for this lattice is an addition group for spatial vectors. It is isomorphic to a discrete group of number vectors, which components are not real or rational but integral. The crystal’s character has the disposition to be interlaced with the kinetic wave character of the X-rays diffracted by the crystal. As a consequence, this kind of diffraction is only possible for a discrete set of wave lengths.
The question of whether figures and kinetic subjects are real usually receives a negative answer, even in Protestant philosophy: ‘No single real thing or event is typically qualified or founded in an original mathematical aspect.’[9] ‘If anything is to be actually real in the world of empirical existence, it must ultimately be founded in physical reality.’ ‘Existence is ordered so as to build on physical foundations.’[10] The view that only physical (material) things are real is a common form of physicalism or materialist naturalism.[11]
First, this is the view of natural experience, which appears to accept only tangible matters to be real. Nevertheless, without the help of any theory, everybody recognizes typical shapes like circles, triangles, or cubes. This applies to typical motions like walking, jumping, rolling, or gliding as well.
Second, reality is sometimes coupled to observability. Now shapes are very well observable, albeit that we always need a physical substrate for any actual observation. Moreover, it would be an impoverishment if we would restrict our experience to what is directly observable. Human imagination is capable of representing many things that are not directly observable. For instance, we are capable of interpreting drawings of two-dimensional figures as three-dimensional objects. Although a movie consists of a sequence of static pictures, we see people moving. We can even see things that have no material existence, like a rainbow.
Third, I observe that the view that shapes are not real is strongly influenced by Plato, Aristotle, and their medieval commentators. According to Plato, spatial forms are invisible, but more real than observable phenomena. In contrast, Aristotle held that forms determine the nature of the things, having a material basis as well. Moreover, the realization of an actual thing requires an operative cause. Hence, according to Aristotle, all actually existing things have a physical character.
In opposition, I maintain that in the cosmos everything is real that answers the laws of the cosmos. Then numbers, groups, spatial figures, and motions are no less real than atoms and stars.
But are these natural or cultural structures? It cannot be denied that the concept of a circle or a triangle is developed in the course of history, in human cultural activity. Yet I consider them to be natural characters, which existence humanity has discovered, like it discovered the characters of atoms and molecules.
Reality is a theoretical concept. It implies that the temporal horizon is much wider than the horizon of our individual experience, and in particular much wider than the horizon of natural experience. By scientific research, we enlarge our horizon, discovering characters that are hidden from natural experience. Nevertheless, such characters are no less real than those known to natural experience are.
We call the kinetic space for waves a medium (and sometimes a field), and we call the physical space for specific interactions a field. For the study of physical interactions, spatial symmetries are very important. For instance, in classical physics this is the case with respect to gravity (Newton’s law), the electrostatic force (Coulomb’s law) and the magnetostatic force. Each of these forces is subject to an ‘inverse square law’. This law expresses the isotropy of physical space. In all directions, the field is equally strong at equal distances from a point-like source, and the field strength is inversely proportional to the square of the distance. About 1830, Carl Friedrich Gauss developed a method allowing of calculating the field strength of combinations of point-like sources. He introduced the concept of ‘flux’ through a surface, popularly expressed, the number of field lines passing through the surface. (An infinitesimal surface is defined as a vector a by its magnitude and the direction perpendicular to the surface. The flux is the scalar product of a with the field strength E at the same location and is maximal if a is parallel to E, minimal if their directions are opposite. If a is perpendicular to E the flux is zero. For a finite surface one finds the flux by integration.)
Gauss proved that the flux through a closed surface around one or more point-like sources is proportional to the total strength of the sources, independent of the shape of that surface and the position of the sources. The proportionality factor depends on the force law and is different in the three mentioned cases. This symmetry property has some important consequences.
Outside the sphere, a homogeneous spherical charge or mass causes a field that is equal to that of a point-like source concentrated in the centre of the sphere. Within the sphere, the field is proportional to the distance from the centre. Starting from the centre, the field initially increases linearly, but outside the sphere, it decreases quadratically. For gravity, Isaac Newton had derived this result by other means.
For magnetic interaction, physicists find empirically that the flux through a closed surface is always zero. This means that within the surface there are as many positive as negative magnetic poles. Magnetism only occurs in the form of dipoles or multipoles. There is no law excluding the existence of magnetic monopoles, but experimental physics has never found them.
In the electrical case, the combination of Gauss’s law with the existence of conductors leads to the conclusion that in a conductor carrying no electric current the electric field is zero. All net charge is located on the surface and the resulting electric field outside the conductor is perpendicular to the surface. Therefore, inside a hollow conductor the electric field is zero, unless there is a net charge in the cavity. Experimentally, this has been tested with a large accuracy. Because this result depends on the inverse square law, it has been established that the exponent in Coulomb’s law differs less than 10-20 from 2. If there is a net charge in the cavity, there is as much charge (with reversed sign) on the inside surface of the conductor. It is distributed such that in the conductor itself the field is zero. If the net charge on the conductor is zero, the charge at the outside surface equals the charge in the cavity. By connecting it with the ‘earth’, the outside can be discharged. Now outside the conductor the electric field is zero, and the charge within the cavity is undetectable. Conversely, a space surrounded by a conductor is screened from external electric fields.
Gauss’s law depicts a purely spatial symmetry and is therefore only applicable in static or quasi-static situations. James Clerk Maxwell combined Gauss’s law for electricity and magnetism with André-Marie Ampère’s law and Michael Faraday’s law for changing fields. As a consequence, Maxwell found the laws for the electromagnetic field. These laws are not static, but relativistically covariant, as Albert Einstein established.
Spin is a well-known property of physical particles. It derives its name from the now as naive considered assumption that a particle spins around its axis. If the particle is subject to electromagnetic interaction, a magnetic moment accompanies the spin, even if the particle is not charged. A neutron has a magnetic moment, whereas a neutrino has not. Spin is an expression of the particle’s rotation symmetry, and is similar to the angular momentum of an electron in its orbit in an atom. A pion has zero spin and transforms under rotation like a scalar. The spin of a photon is 1 and it transforms like a vector. The hypothetical graviton’s spin is twice as large, behaving as a tensor at rotation. These particles, called bosons, have symmetrical wave functions. Having a half-integral spin (as is the case with, e.g., an electron or a proton), a fermion’s wave function is antisymmetric. It changes of sign after a rotation of 2pi (4.4). This phenomenon is unknown in classical physics, but plays an important part in quantum statistics.
Encyclopaedia of relations and characters. 3. Symmetry
3.3. Non-Euclidean space-time
in the theory of relativity
Until the end of the nineteenth century, motion was considered as change of place, with time as the independent variable. Isaac Newton thought space to be absolute, the expression of God’s omnipresence, a sensorium Dei.[12] Newton’s contemporaries Christiaan Huygens and Gottfried Wilhelm Leibniz were more impressed by the relativity of motion. They believed that anything only moves relative to something else, not relative to absolute space. As soon as Thomas Young, Augustin Fresnel and other physicists in the nineteenth century established that light is a moving wave, they started the search for the ether, the material medium for wave motion. They identified the ether with Newton’s absolute space, now without the speculative reference to God’s omnipresence. This search had little success, the models for the ether being inconsistent or contrary to observed facts. In 1865, James Clerk Maxwell formulated his electromagnetic theory, connecting magnetism with electricity, and interpreting light as an electromagnetic wave motion. Although Maxwell’s theory did not require the ether, he persisted in believing its existence. In 1905, Albert Einstein suggested to abandon the ether.[13] He did not prove that it does not exist, but showed it to be superfluous. Physicists intended the ether as a material substratum for electromagnetic waves. However, in Einstein’s theory it would not be able to interact with anything else. Consequently, the ether lost its physical meaning. (The cosmic electromagnetic background radiation discovered by Arno Penzias and Robert Wilson in 1964 may be considered to be an ether.)
Until Einstein, kinetic time and space were considered independent frames of reference. In 1905, Albert Einstein shook the world by proving that the kinetic order implies a relativization of the quantitative and spatial orders. Two events being synchronous according to one observer turn out to be diachronous according to an observer moving at high speed with respect to the former one. This relativizing is unheard of in the common conception of time, and it surprised both physicists and philosophers.
Einstein based the special theory of relativity on two postulates or requirements for the theory. The first postulate is the principle of relativity. It requires each natural law to be formulated in the same way with respect to each inertial frame of reference. The second postulate demands that light have the same speed in every inertial system. From these two axioms, Einstein could derive the mentioned relativization of the quantitative and spatial orders. He also showed that the units of length and of time depend on the choice of the reference system. Moving rulers are shorter and moving clocks are slower than resting ones. In the theory of Hendrik Lorentz and others, time dilation and space contraction were explained as molecular properties of matter. Einstein explained them as kinetic effects. Only the speed of light is in all reference systems the same, acting as a unit of motion. Indeed, relativity theory often represents velocities in proportion to the speed of light.
An inertial system is a system of reference in which Newton’s first law of motion, the law of inertia, is valid. Unless some unbalanced force is acting on it, a body moves with constant velocity (both in magnitude and in direction) with respect to an inertial system. This is a reference system for motions; hence, it includes clocks besides a spatial co-ordinate system. If we have one inertial system, we can find many others by shifting, rotating, reflecting, or inversing the spatial co-ordinates; or by moving the system at a constant speed; or by resetting the clock, as long as it displays kinetic time uniformly (4.1). These operations form a group, in classical physics called the Galileo group. Here time is treated as a variable parameter independent of the three-dimensional spatial co-ordinate system. Since Einstein proved this to be wrong, an inertial system is taken to be four-dimensional. The corresponding group of operations transforming one inertial system into another one is called the Lorentz group (sometimes called the Poincaré group, of which the Lorentz group (now without spatial and temporal translations) would be a subgroup.) The distinction between the classical Galileo group and the special relativistic Lorentz group concerns relatively moving systems. Both have an Euclidean subgroup of inertial systems not moving with respect to each other. The distinction concerns the combination of motions, objectified by velocities. Restricted to one direction, in the Galileo group velocities are combined by addition (v+w), in the Lorentz group by the formula (v+w)/(1+vw/c2), see section 2.3. The name ‘Galileo group’ dates from the twentieth century.
In a four-dimensional inertial system, a straight line represents a uniform motion. Each point on this line represents the position (x,y,z) of the moving subject at the time t. If the speed of light is the unit of velocity, a line at an angle of π/4 with respect to the t-axis represents the motion of a light signal. The relativistic metric concerns the spatio-temporal interval between two events. The metric of special relativity theory is ds2=dx2+dy2+dz2-dt2=dr2-dt2. There are no mixed terms, and the interval is not necessarily infinitesimal. This metric is pseudo-Euclidean because of the minus sign in front of dt2. If the speed of light is not taken as the unit of speed, this term becomes c2dt2 The metric can be made apparently Euclidean by considering time an imaginary co-ordinate: ds2=dx2+dy2+dz2+(idt)2. It is preferable to make visible that kinetic space is less symmetric than the Euclidean four-dimensional space, for lack of symmetry between the time axis and the three spatial axes. According to the formula, ds2 can be positive or negative, and Δs real or imaginary. Therefore, one defines the interval as the absolute value of Δs.
The combination rule in the Lorentz group is formulated such that the interval is invariant at each transformation of one inertial system into another one. Only then, the speed of light (the unit of motion) is equal in all inertial systems. A flash of light expands spherically at the same speed in all directions, in any inertial reference system in which this phenomenon is registered. This system is called the block universe or Hermann Minkowski’s space-time continuum.[14]
The magnitude of the interval is an objective representation of the relation between two events, combining a time difference with a spatial distance. For the same pair of events in another inertial system, both the time difference Δt and the spatial distance Δr may be different. Only the magnitude Δs of the interval is independent of the choice of the inertial system.
Whereas the Euclidean metric is always positive or zero, the pseudo-Euclidean metric, determining the interval between two events may be negative as well. For the motion of a light signal between two points, the interval is zero. For a light signal, Δs=0, for the covered distance Δr equals t. If Δr=0, the two events have the same position and the interval is a time difference (Δt). If Δt=0, the interval is a spatial distance (Δr) and the two events are simultaneous. In other cases, an interval is called space-like if the distance Δr>t, or time-like if the time difference Δtr/c (in absolute values). In the first case, light cannot bridge the distance within the mentioned time difference, in the second case it can.
For two events having a space-like interval, an inertial system exists such that the time difference is zero (Δt=0), hence the events are simultaneous. In another system, the time difference may be positive or negative. The distance between the two events is too large to be bridged even by a light signal, hence the two events cannot be causally related. Whether such a pair of events is diachronous or synchronous appears to depend on the choice of the inertial system.
Other pairs of events are diachronous in every inertial system, their interval being time-likes2<0). If in a given inertial system event A occurs before event B, this is the case in any other inertial system as well. Now A may be a cause of B, anticipating the physical relation frame. The causal relation is irreversible, the cause preceding the effect.[15]
The formula for the relativistic metric shows that space and time are not equivalent, as is often stated. By a rotation about the z-axis, the x-axis can be transformed into the y-axis. In contrast, no physically meaningful transformation exists from the t-axis into one of the spatial axes or conversely.
In the four-dimensional space-time continuum, the spatial and temporal co-ordinates form a vector. Other vectors are four-dimensional as well, often by combining a classical three-dimensional vector with a scalar. This is meaningful if the vector field has the same or a comparable symmetry as the space-time continuum. For instance, the linear momentum and the energy of a particle are combined into the four-dimensional momentum-energy vector (px,py,pz,E/c). Its magnitude (the square root of px2+py 2+pz2-E2/c2) has in all inertial systems the same value. The theory of relativity distinguishes invariant, covariant, and contravariant magnitudes, vectors etc.
An unexpected consequence of the symmetry of physical space and time is that the laws of conservation of energy, linear and angular momentum turn out to be derivable from the principle of relativity. Emmy Noether first showed this in 1915.[16] Because natural laws have the same symmetry as kinetic space, the conservation laws in classical mechanics differ from those in special relativity.
Considering the homogeneity and isotropy of a field-free space and the uniformity of kinetic time, theoretically the principle of relativity allows of two possibilities for the transformations of inertial systems.[17] According to the classical Galileo group, the metric for time is independent of the metric for space. The units of length and time are invariant under all transformations. The speed of light is different in relatively moving inertial systems. In the relativistic Lorentz group, the metrics for space and time are interwoven into the metric for the interval between two events. The units of length and time are not invariant under all transformations. Instead, the unit of velocity (the speed of light) is invariant under all transformations. On empirical grounds, the speed of light being the same in all inertial systems, physicists accept the second possibility. Not the Galileo group but the Lorentz group turns out to be interlaced with kinetic space-time.
According to the principle of relativity, the natural laws can be formulated independent of the choice of an inertial system. Albert Einstein called this a postulate, a demand imposed on a theory. In contrast, Mario Bunge calls it a norm, a ‘normative metanomological principle …’ constituting ‘…a necessary though insufficient condition for objectivity …’[18] I suggest that it rests on the irreducibility of physical interaction to spatial or kinetic relations. The principle of relativity is not merely a convention, an agreement to formulate natural laws as simple as possible. It is first of all a requirement of objectivity, to formulate the laws such that they have the same expression in every appropriate reference system.
Yet, physicists do not always stick to the principle of relativity. When standing on a revolving merry-go-round, anyone feels an outward centrifugal force. When trying to walk on the roundabout he or she experiences the force called after Gustave-Caspar Coriolis as well. These forces are not the physical cause of acceleration, but its effect. Both are inertial forces, only occurring in a reference system accelerating with respect to the inertial systems.
Although the centrifugal force and the Coriolis force do not exist with respect to inertial systems, they are real, being measurable and exerting influence. In particular, the earth is a rotating system. The centrifugal force causes the acceleration of a falling body to be larger at the poles than at the equator, partly directly, partly due to the flattening of the earth at the poles, another effect of the centrifugal force. The Coriolis force causes the rotation of the pendulum called after Léon Foucault, and it has a strong influence on the weather. The wind does not blow directly from a high- to a low-pressure area, but it is deflected by the Coriolis force to encircle such areas.
Another example of an inertial force occurs in a reference system having a constant acceleration with respect to inertial systems. This force experienced in an accelerating or braking lift or train is equal to the product of the acceleration and the mass of the subject on which the force is acting. It is a universal force, influencing the motion of all subjects that we wish to refer to the accelerated system of reference.
Often, physicists and philosophers point to that inertial force in order to argue that the choice of inertial systems is arbitrary and conventional. Only because of simplicity, we prefer inertial systems, because it is awkward to take into account these universal forces. A better reason to avoid such universal forces is that they do not represent subject-subject relations. Inertial forces do not satisfy Newton’s third law, the law of equal action and reaction, for an inertial force has no reaction.[19] The source of the force is not another subject. A Newtonian physicist would call such a force fictitious.[20] The use of inertial forces is only acceptable for practical reasons. For instance, this applies to weather forecasting, because the rotation of the earth strongly influences the weather.
Another hallmark of inertial forces is to be proportional to the mass of the subject on which they act. In fact, it does not concern a force but an acceleration, i.e., the acceleration of the reference system with respect to inertial systems. We interpret it as a force, according to Isaac Newton’s second law, but it does not satisfy his third law.
Gravity too happens to be proportional to the mass of the subject on which it acts. At any place, all freely falling subjects experience the same acceleration. Hence, gravity looks like an inertial force. This inspired Albert Einstein to develop the general theory of relativity, defining the metric of space and time such that gravity is eliminated. It leads to a curved space-time, having a strong curvature at places where - according to the classical view - the gravitational field is strong. Besides subjects having mass, massless things experience this field as well. Even light moves according to this metric, as confirmed by ingenious observations.
Yet, gravity is not an inertial force, because it satisfies Newton’s third law. Contrary to the centrifugal and Coriolis forces, gravity expresses a subject-subject relation. The presence of heavy matter determines the curvature of space-time. In classical physics, gravity was the prototype of a physical subject-subject relation. One of the unexpected results of Isaac Newton’s Principia was that the planets attract the sun, besides the sun attracting the planets. It undermined Newton’s Copernican view that the sun is at rest at the centre of the world.[21]
Einstein observed that a gravitational field in a classical inertial frame is equivalent with an accelerating reference system without gravity, like an earth satellite. The popular argument for this principle of equivalence is that locally one could not measure any difference.[22] I like to make four comments.
First, on a slightly larger scale the difference between a homogeneous acceleration and a non-homogeneous gravitational field is easily determined.[23] Even in an earth satellite, differential effects are measurable. Except for a homogeneous field, the principle of equivalence is only locally valid.[24]
Second, the curvature of space-time is determined by matter, hence it has a physical source. The gravity of the sun causes the deflection of starlight observed during a total eclipse. An inertial force lacks a physical source.
Third, in non-inertial systems of reference, the law of inertia is invalid. In contrast, the general theory of relativity maintains this law, taking into account the correct metric. A subject on which no force is acting – apart from gravity – moves uniformly with respect to the general relativistic metric. If considered from a classical inertial system, this means a curved and accelerated motion due to gravity. The general relativistic metric does not eliminate, but incorporates gravity.
Finally, in the general relativistic space-time, the speed of light remains the universal unit of velocity. Light moves along a ‘straight’ line (the shortest line according to Bernhard Riemann’s definition). Accelerating reference systems still give rise to inertial forces. This means that Einstein’s original intention to prove the equivalence of all moving reference systems has failed.
The metrics of special and general relativity theory presuppose that light moves at a constant speed everywhere. The empirically confirmed fact that light is subject to gravity necessitates an adaptation of the metric. In the general theory of relativity, kinetic space-time is less symmetric than in the special theory. Because gravity is quite weak compared to other interactions, this symmetry break is only observable at a large scale, at distances where other forces do not act or are neutralized. Where gravity can be neglected, the special theory of relativity is applicable.
The general relativistic space-time is not merely a kinetic, but foremost a physical manifold. The objection against the nineteenth-century ether was that it did not allow of interaction. This objection does not apply to the general relativistic space-time. This acts on matter and is determined by matter.[25]
The general theory of relativity presents models for the physical space-time, which models are testable. It leads to the insight that the physical cosmos is finite and expanding. It came into being about thirteen billions years ago, in a ‘big bang’. According to the standard model to be discussed in section 5.1, the fundamental forces initially formed a single universal interaction. Shortly after the big bang they fell apart by a symmetry break into the present electromagnetic, strong and weak nuclear interaction besides the even weaker gravity. Only then the characters to be discussed in the next two chapters were gradually realized in the astrophysical evolution of the universe.
[1] Shapiro 1997, 158; Torretti 1999, 408-410.
[2] Barrow 1992, 129-134; Shapiro 1997, chapter 5.
[3] Galileo 1632, 20-22.
[4] Van Fraassen 1989, 262.
[5] Grünbaum 1973, chapter 1; Sklar 1974, 88-146.
[6] Jammer 1954, 150-166; Sklar 1974, 13-54; Torretti 1999, 157.
[7] Torretti 1999, 149.
[8] Torretti 1999, 155.
[9] Dooyeweerd 1953-1958, III, 99.
[10] Hart 1984, 156, 263.
[11] Stafleu 2019, chapter 11.
[12] Stafleu 2019, 4.4.
[13]Einstein 1905.
[14]Minkowski 1908.
[15] Bunge 1967a, 206.
[16] Stafleu 2019, 6.6.
[17] Rindler 1969, 24, 51-53.
[18]Bunge 1967a, 213, 214.
[19] French 1965, 494.
[20]French 1965, 511.
[21]Newton 1687, 419.
[22]Bunge 1967a, 207-210.
[23] Rindler 1969, 19; Sklar 1974, 70.
[24]Bunge 1967a, 210-212.
[25] Rindler 1969, 242.
Encyclopaedia of relations and characters,
their evolution and history
Chapter 4
Periodic motion
4.1. Motion as a relation frame
4.2. The character of oscillations and waves
4.3. A wave packet as an aggregate
4.4. Symmetric and antisymmetric wave functions
Encyclopaedia of relations and characters. 4. Periodic motion
4.1. Motion as a relation frame
Chapter 4 investigates characters primarily qualified by kinetic relations. In ancient and medieval philosophy, local motion was considered a kind of change. Classical mechanics emphasized uniform and accelerated motion of unchanging matter. In modern physics, the periodic motion of oscillations and waves is the main theme. In living nature and technology, rhythms play an important part as well.
Twentieth-century physics is characterized by the theory of relativity (chapter 3), by the investigation of the structure of matter (chapter 5), and by quantum physics. The latter is dominated by the duality of waves and particles. Section 4.1 discusses the kinetic relation frame and section 4.2 the kinetic character of oscillations and waves. Section 4.3 deals with the character of a wave packet with its anticipations on physical interaction. Section 4.4 concerns the meaning of symmetrical and antisymmetrical wave functions for physical aggregates.
Kinetically qualified characters are founded in the quantitative or the spatial relation frame and are interlaced with physical characters. Like numbers and spatial forms, periodic motions take part in our daily experience. And like irrational numbers and non-Euclidean space, some aspects of periodic phenomena collide with common sense. Chapter 4 aims to demonstrate that a realistic interpretation of quantum physics is feasible and even preferable to the standard non-realistic interpretations. This requires insight in the phenomenon of character interlacement.
In section 1.2, I proposed relative motion to be the third general type of relations between individual things and processes. Kinetic time is subject to the kinetic order of uniformity and is expressed in the periodicity of mechanical or electric clocks. Before starting the investigation of kinetic characters, I discuss some general features of kinetic time.
Like the rational and real numbers, points on a continuous line are ordered, yet no point has a unique successor (2.2). One cannot say that a point A is directly succeeded by a point B, because there are infinitely many other points between A and B. Yet, a uniformly or accelerating moving subject passes the points of its path successively.[1] The succession of temporal moments cannot be reduced to quantitative and/or spatial relations. It presupposes the numerical order of earlier and later and the spatial order of simultaneity, being diachronic and synchronic aspects of kinetic time. Zeno of Elea recognized this long before the Christian era. Nevertheless, until the seventeenth century, motion was not recognized as an independent principle of explanation.[2] Later on, it was reinforced by Albert Einstein’s theory of relativity (3.3).
The uniformity of kinetic time seems to rest on a convention.[3] Sometimes it is even meaningful to construct a clock that is not uniform. For instance, the non-uniform physical order of radioactive decay is applied in the dating of archaeological and geological finds.[4] However, the uniformity of kinetic time together with the periodicity of many kinds of natural motion yields a kinetic norm for clocks. A norm is more than a mere agreement or convention. If applied by human beings constructing clocks, the law of inertia becomes a norm. A clock does not function properly if it represents a uniform motion as non-uniform.
With increasing clarity, the law of inertia was formulated by Galileo Galilei, René Descartes and others, finding its ultimate form in Isaac Newton’s first law of motion: ‘Every body continues in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed upon it.’[5]
Inertial motion is not in need of a physical cause. Classical and modern physics consider inertial motion to be a state, not a change. In this respect, modern kinematics differs from Aristotle’s, who assumed that each change needs a cause, including local motion. Contrary to Aristotle (being the philosopher of common sense), the seventeenth-century physicists considered friction to be a force. Friction causes an actually moving subject to decelerate. In order to maintain a constant speed, another force is needed to compensate for friction. Aristotelians did not recognize friction as a force and interpreted the compensating force as the cause of uniform motion.
Uniformity of motion means that the subject covers equal distances in equal times. But how do we know which times are equal? The diachronous order of earlier and later allows of counting hours, days, months, and years. These units do not necessarily have a fixed duration. In fact, months are not equal to each other, and a leap year has an extra day. Until the end of the Middle Ages, an hour was not defined as 1/24th of a complete day, but as the 1/12th part of a day taken from sunrise to sunset. A day in winter being shorter than in summer, the duration of an hour varied with the seasons. Only after the introduction of mechanical clocks in the fifteenth century, it became customary to relate the length of an hour to the period from noon to noon, such that all hours are equal.
Mechanical clocks measure kinetic time. Time as measured by a clock is called uniform if the clock correctly shows that a subject on which no net force is acting moves uniformly.[6] This appears to be circular reasoning. On the one side, the uniformity of motion means equal distances in equal times. On the other hand, the equality of temporal intervals is determined by a clock subject to the norm that it represents uniform motion correctly.[7] This circularity is unavoidable, meaning that the uniformity of kinetic time is an unprovable axiom. However, this axiom is not a convention, but an expression of a fundamental and irreducible law.
The uniformity of time is sometimes derived from a ceteris paribus argument. If one repeats a process at different moments under exactly equal circumstances, there is no reason to suppose that the process would proceed differently. In particular the duration should be the same. This reasoning is applicable to periodic motions, like in clocks. But it betrays a deterministic vision and is not applicable to stochastic processes like radioactivity. Albert Einstein observed that the equality of covered distances provides a problem as well, because spatial relations are subject to the order of simultaneity, dependent on the state of motion of the clocks used for measuring uniform motion.
Uniformity is a law for kinetic time, not an intrinsic property of time. There is nothing like a stream of time, flowing independently of the rest of reality. Positivist philosophers denied the ontological status of uniform time. Mach states emphatically: ‘The question of whether a motion is uniform in itself has no meaning at all. No more can we speak of an “absolute time” (independent of any change).’ [8] In my view, the law of inertia determines the meaning of the uniformity of time. According to Hans Reichenbach, it is an ‘empirical fact’ that different definitions give rise to the same ‘measure of the flow of time’: natural, mechanical, electronic or atomic clocks, the laws of mechanics, and the fact that the speed of light is the same for all observers.[9] ‘It is obvious, of course, that this method does not enable us to discover a “true” time, but that astronomers simply determine with the aid of the laws of mechanics that particular flow of time which the laws of physics implicitly define.’[10] However, if ‘truth’ means law conformity, ‘true time’ is the time subject to natural laws. It seems justified to generalize Reichenbach’s ‘empirical fact’, to become the law concerning the uniformity of kinetic time. Rudolf Carnap poses that the choice of the metric of time rests on simplicity: the formulation of natural laws is simplest if one sticks to this convention.[11] But then it is quite remarkable that so many widely different systems confirm to this human agreement. More relevant is to observe that physicists are able to explain all kinds of periodic motions and processes based on laws that presuppose the uniformity of kinetic time. Such an explanation is completely lacking with respect to any alternative metric invented by philosophers. Time only exists in relations between events. The uniformity of kinetic time expressed by the law of inertia asserts the existence of motions being uniform with respect to each other.
Both classical and relativistic mechanics use this law to introduce inertial systems. An inertial system is a spatio-temporal reference system in which the law of inertia is valid. It can be used to measure accelerated motions as well. Starting with one inertial system, all others can be constructed by using either the Galileo group or the Lorentz group, reflecting the relativity of motion (3.3). Both start from the axiom that kinetic time is uniform.
The law of uniformity concerns all dimensions of kinetic space. Therefore, it is possible to project kinetic time on a linear scale, irrespective of the number of dimensions of kinetic space. Equally interesting is that kinetic time can be projected on a circular scale, as displayed on a traditional clock. The possibility of establishing the equality of temporal intervals is actualized in uniform circular motion, in oscillations, waves, and other periodic processes. Therefore, besides the general aspect of uniformity, the time measured by clocks has a characteristic component as well, the periodic character of any clock. Mechanical clocks depend on the regularity of a pendulum or a balance. Electronic clocks apply the periodicity of oscillations in a quartz crystal. Periodicity has always been used for the measurement of time. The days, months, and years refer to periodic motions of celestial bodies. The modern definition of the second depends on atomic oscillations. The periodic character of clocks allows of digitalizing kinetic time, each cycle being a unit, whereas the cycles are countable.
The uniformity of kinetic time as a universal law for kinetic relations and the periodicity of all kinds of periodic processes reinforce each other. Without uniformity, periodicity cannot be understood, and vice versa.
The positivist idea that the uniformity of kinetic time is no more than a convention has the rather absurd consequence, that the periodicity of oscillations, waves and other natural rhythms would be a convention as well.
Encyclopaedia of relations and characters. 4. Periodic motion
4.2. The character of oscillations and waves
Periodicity is the distinguishing mark of each primary kinetic character with a tertiary physical characteristic. The motion of a mechanical pendulum, for instance, is primarily characterized by its periodicity and tertiarily by gravitational acceleration. For such an oscillation, the period is constant if the metric for kinetic time is subject to the law of inertia. This follows from an analysis of pendulum motion. The character of a pendulum is applied in a clock. The dissipation of energy by friction is compensated such that the clock is periodic within a specified margin.
Kepler’s laws determine the character of periodic planetary motion. Strictly speaking, these laws only apply to a system consisting of two subjects, a star with one planet or binary stars. Both Isaac Newton’s law of gravity and the general theory of relativity allow of a more refined analysis. Hence, the periodic motions of the earth and other systems cannot be considered completely apart from physical interactions. However, in this section I shall abstract from physical interaction in order to concentrate on the primary and secondary characteristics of periodic motion.
The simplest case of a periodic motion appears to be uniform circular motion. Its velocity has a constant magnitude whereas its direction changes constantly. Ancient and medieval philosophy considered uniform circular motion to be the most perfect, only applicable to celestial bodies. Seventeenth-century classical mechanics discovered uniform rectilinear motion to be more fundamental, the velocity being constant in direction as well as in magnitude. Christiaan Huygens assumed that the outward centrifugal acceleration is an effect of circular motion. Robert Hooke and Isaac Newton demonstrated the inward centripetal acceleration to be the cause needed to maintain a uniform circular motion. This force should be specified for any instance of uniform circular motion.
Not moving itself, the circular path of motion is simultaneously a kinetic object and a spatial subject. The position of the centre and the magnitude and direction of the circle’s radius vector determine the spatial position of the moving subject on its path. The radius is connected to magnitudes like orbital or angular speed, acceleration, period and phase. The phase (φ) indicates a moment in the periodic motion, the kinetic time (t) in proportion to the period (T): φ=t/T=ft modulo 1. If considered an angle, φ=2πft modulo 2π. A phase difference of ¼ between two oscillations means that one oscillation reaches its maximum when the other passes its central position.
These quantitative properties allow of calculations and an objective representation of motion.
A uniform circular motion can be constructed as a composition of two mutually perpendicular linear harmonic motions, having the same period and amplitude and a phase difference of one quarter. But then circular uniform motion turns out to be merely a single instance of a large class of two-dimensional harmonic motions. A similar composition of two harmonics – having the same period but different amplitudes or a phase difference other than one quarter – does not produce a circle but an ellipse. If the force is inversely proportional to the square of the distance (like the gravitational force of the sun exerted on a planet), the result is a periodic elliptic motion as well, but this one cannot be constructed as a combination of only two harmonic oscillations. Observe that an ellipse can be defined primarily (spatially) as a conic section, secondarily (quantitatively) by means of a quadratic equation between the co-ordinates [e.g., (x-x0)2/a2+(y-y0)2/b2=1], and tertiarily as a path of motion, either kinetically as a combination of periodic oscillations or physically as a planetary orbit.
We can also make a composition of two mutually perpendicular oscillations with different periods. Now according to Jules Lissajous, this constitutes a closed curve if and only if the two periods have a harmonic ratio, i.e., a rational number. If the proportion is an octave (1:2), then the resulting figure is a lemniscate (a figure eight). The Lissajous figures derive their specific regularity from periodic motions. Clearly, the two-dimensional Lissajous motions constitute a kinetic character. This character has a primary rational variation in the harmonic ratio of the composing oscillations, as well as a secondary variation in frequency, amplitude and phase. It is interlaced with the character of linear harmonic motion and several other characters. The structure of the path like the circle or the lemniscate is primarily spatially and secondarily quantitatively founded. A symmetry group is interlaced with the character of each Lissajous-figure, the circle being the most symmetrical of all.
In all mentioned characters, we find a typical subject-object relation determining an ensemble of possible variations. In the structure of the circle, the circumference has a fixed proportion to the diameter. This allows of an unbounded variation in diameter. In the character of the harmonic motion, we find the period (or its inverse, the frequency) as a typical magnitude, allowing of an unlimited variability in period as well as a bounded variation of phase. Varying the typical harmonic ratio results in an infinite but denumerable ensemble of Lissajous-figures.
A linear harmonic oscillation is quantitatively represented by a harmonic function. This is a sine or cosine function or a complex exponential function, being a solution of a differential equation. This equation, the law for harmonic motion, states that the acceleration a is proportional to the distance x of the subject to the centre of oscillation x0, according to: a=d2x/dt2=-(2πf)2(x-x0) wherein the frequency f=1/T is the inverse of the period T. The minus sign means that the acceleration is always directed to the centre.
This equation, the law for harmonic motion, concerns mechanical or electronic oscillations, for instance. Primarily, a harmonic oscillation has a specific kinetic character. It is a special kind of motion, characterized by its law and its period. An oscillation is secondarily characterized by magnitudes like its amplitude and phase, not determined by the law but by accidental initial conditions. Hence, the character of an oscillation is kinetically qualified and quantitatively founded.
The harmonic oscillation can be considered the basic form of any periodic motion, including the two-dimensional periodic motions discussed above. In 1822, Joseph Fourier demonstrated that each periodic function is the sum or integral of a finite or infinite number of harmonic functions. The decomposition of a non-harmonic periodic function into harmonics is called Fourier analysis.
A harmonic oscillator has a single natural frequency determined by some specific properties of the system. This applies, for instance, to the length of a pendulum; or to the mass of a subject suspended from a spring together with its spring constant; or to the capacity and the inductance in an electric oscillator consisting of a capacitor and a coil. This means that the kinetic character of a harmonic oscillation is interlaced with the physical character of an electric artefact.
Accounting for energy dissipation by adding a velocity-dependent term leads to the equation for a damped oscillator. Now the initial amplitude decreases exponentially. In the equation for a forced oscillation, an additional acceleration accounts for the action of an external periodic force. In the case of resonance, the response is maximal. Now the frequency of the driving force is approximately equal to the natural frequency. Applying a periodic force, pulse or signal to an unknown system and measuring its response is a widely used method of finding the system’s natural frequency, revealing its characteristic properties.
An oscillation moving in space is called a wave. It has primarily a kinetic character, but contrary to an oscillation it is secondarily founded in the spatial relation frame. Whereas the source of the wave determines its period, the velocity of the wave, its wavelength and its wave number express the character of the wave itself.
In an isotropic medium, the wavelength λ is the distance covered by a wave with wave velocity v in a time equal to the period T: λ=νT=ν/f. The inverse of the wavelength is the wave number (the number of waves per metre), σ=1/λ=f/ν. In three dimensions, the wave number is replaced by the wave vector k, which besides the number of waves per metre also indicates the direction of the wave motion. In a non-isotropic medium, the wave velocity depends on the direction. The wave velocity has a characteristic value independent of the motion of the source. It is a property of the medium, the kinetic space of a wave that specifically differs from the general kinetic space as described by the Galileo or Lorentz group.
Usually, the wave velocity depends on the frequency as well. This phenomenon is called dispersion. Only light moving in a vacuum is free of dispersion. (The medium of light in vacuum is the electromagnetic field.) The observed frequency of a source depends on the relative motions of source, observer and medium. This is the effect called after Christian Doppler.
A wave has a variability expressed by its frequency, phase, amplitude, and polarization. Polarization concerns the direction of oscillation. A sound wave in air is longitudinal, the direction of oscillation being parallel to the direction of motion. Light is transversal, the direction of oscillation being perpendicular to the direction of motion. Light is called unpolarized if it contains waves having all directions of polarization. Light may be partly or completely polarized. It may be linearly polarized (having a permanent direction of oscillation) or circularly polarized (the direction of oscillation itself rotating at a frequency independent of the frequency of the wave itself).
During the motion, the wave’s amplitude may decrease. For instance, in a spherical wave the amplitude decreases in proportion to the distance from the centre.
Waves do not interact with each other, but are subject to superposition. This is a combination of waves taking into account amplitude as well as phase. Superposition occurs when two waves are crossing each other. Afterwards each wave proceeds as if the other had been absent. Interference is a special case of superposition. Now the waves concerned have exactly the same frequency as well as a fixed phase relation. If the phases are equal, interference means an increase of the net amplitude. If the phases are opposite, interference may result in the mutual extinction of the waves.
Just like an oscillation, each wave has a tertiary, usually physical disposition. This explains why waves and oscillations give a technical impression, because technology opens dispositions. During the seventeenth century, the periodic character of sound was discovered in musical instruments. The relevance of oscillations and waves in nature was only fully realized at the beginning of the nineteenth century. This happened after Thomas Young and Augustin Fresnel brought about a break-through in optics by discovering the wave character of light in quite technical experiments. Since the end of the same century, oscillations and waves dominate communication and information technology.
It will be clear that the characters of waves and oscillations are interlaced with each other. A sound wave is caused by a loudspeaker and strikes a microphone. Such an event has a physical character and can only occur if a number of physical conditions are satisfied. However, there is a kinetic condition as well. The frequency of the wave must be adapted to the oscillation frequency of the source or the detector. The wave and the oscillating system are correlated. This correlation concerns the property they have in common, i.e., their periodicity, their primary kinetic qualification.
Sometimes an oscillation and a wave are directly interlaced, for instance in a violin string. Here the oscillation corresponds to a standing wave, the result of interfering waves moving forward and backward between the two ends. The length of the string determines directly the wavelength and indirectly the frequency, dependent on the string’s physical properties determining the wave velocity. Amplified by a sound box, this oscillation is the source of a sound wave in the surrounding air having the same frequency. In fact, all musical instruments perform according to this principle. The wave is always spatially determined by its wavelength. The length of the string fixes the fundamental tone (the keynote or first harmonic) and its overtones. The frequency of an overtone is an integral number times the frequency of the first harmonic.
A wave equation represents the law for a wave, and a real or complex wave function represents an individual wave. Whereas the equation for oscillations only contains derivatives with respect to time, the wave equation also involves differentiation with respect to spatial co-ordinates. Usually a linear wave equation provides a good approximation for a wave, for example, the equations for the propagation of light. Edwin Schrödinger’s non-relativistic equation and Paul Dirac’s relativistic equation describe the motion of material waves.
If j and f are solutions of a linear wave equation, then aj+bf is a solution as well, for each pair of real (or complex) numbers a and b. Hence, a linear wave equation has an infinite number of solutions, an ensemble of possibilities. Whereas the equation for an oscillation determines its frequency, a wave equation allows of a broad spectrum of frequencies. The source determines the frequency, the initial amplitude and the phase. The medium determines the wave velocity, the wavelength and the decrease of the amplitude when the wave proceeds away from the source.
Events having their origin in relative motions may be characteristic or not. A solar or lunar eclipse depends on the relative motions of sun, moon and earth. It is accidental and probably unique that the moon and the sun are equally large as seen from the earth, such that the moon is able to cover the sun precisely. Such an event does not correspond to a character. However, wave motion gives rise to several characteristic events satisfying specific laws.
Willebrord Snell’s and David Brewster’s laws for the refraction and reflection of light at the boundary of two media only depend on the ratio of the wave velocities, the index of refraction. Because this index depends on the frequency, light passing a boundary usually displays dispersion, like in a prism. Dispersion gives rise to various special natural phenomena like a rainbow or a halo, or artificial ones, like Isaac Newton’s rings.
If the boundary or the medium has a periodic character like the wave itself, a special form of reflection or refraction occurs if the wavelength fits the periodicity of the lattice. In optical technology, diffraction and reflection gratings are widely applied. Each crystal lattice forms a natural three-dimensional grating for X-rays, if their wavelength corresponds to the periodicity of the crystal lattice according to Lawrence and William Bragg’s law.
These are characteristic kinetic phenomena, not because they lack a physical aspect, but because they can be explained satisfactorily by a kinetic theory of wave motion.
Encyclopaedia of relations and characters. 4. Periodic motion
4.3. A wave packet as an aggregate
Many sounds are signals. A signal being a pattern of oscillations moves as an aggregate of waves from the source to the detector. This motion has a physical aspect as well, for the transfer of a signal requires energy. But the message is written in the oscillation pattern, being a signal if a human or an animal receives and recognizes it.
A signal composed from a set of periodic waves is called a wave packet. Although a wave packet is a kinetic subject, it achieves its foremost meaning if considered interlaced with a physical subject having a wave-particle character. The wave-particle duality has turned out to be equally fundamental and controversial. Neither experiments nor theories leave room for doubt about the existence of the wave-particle duality. However, it seems to contradict common sense, and its interpretation is the object of hot debates.
René Descartes and Christiaan Huygens assumed that space is completely filled up with matter, that space and matter coincide. They considered light to be a succession of mechanical pulses in space. Descartes believed that light does not move, but has a tendency to move. Huygens denied that wave motion is periodical.[12] From the fact that planets move without friction, Isaac Newton inferred that interplanetary space is empty. He supposed that light consists of a stream of particles. In order to explain interference phenomena like the rings named after him, he ascribed the light particles (or the medium) properties that we now consider to apply to waves.[13]
Between 1800 and 1825, Thomas Young in England and Augustin Fresnel in France developed the wave theory of light. Common sense dictated waves and particles to exclude each other, meaning that light is either one or the other. When the wave theory turned out to explain more phenomena than the particle model, the battle was over.[14] Decisive was Léon Foucault’s experimental confirmation in 1854 of the wave-theoretical prediction that light has a lower speed in water than in air. Isaac Newton’s particle theory predicted the converse. Light is wave motion, as was later confirmed by James Clerk Maxwell’s theory of electromagnetism. Nobody realized that this conclusion was a non sequitur. At most, it could be said that light has wave properties, as follows from the interference experiments of Young and Fresnel, and that Newton’s particle theory of light was refuted.[15]
Nineteenth-century physics discovered and investigated many other rays. Some looked like light, such as infrared and ultraviolet radiation (about 1800), radio waves (1887), X-rays and gamma rays (1895-96). These turned out to be electromagnetic waves. Other rays consist of particles. Electrons were discovered in cathode rays (1897), in the photoelectric effect and in beta-radioactivity. Canal rays consist of ions and alpha rays of helium nuclei. Cathode rays, canal rays and X-rays are generated in a cathode tube, a forerunner of our television tube, fluorescent lamp and computer screen.
At the end of the nineteenth century, this gave rise to a rather neat and rationally satisfactory worldview. Nature consists partly of particles, for the other part of waves, or of fields in which waves are moving. This dualistic worldview assumes that something is either a particle or a wave, but never both, tertium non datur.
It makes sense to distinguish a dualism, a partition of the world into two compartments, from a duality, a two-sidedness. The dualism of waves and particles rested on common sense, one could not imagine an alternative. However, twentieth-century physics had to abandon this dualism perforce and to replace it by the wave-particle duality. All elementary things have both a wave and a particle character.
Almost in passing, another phenomenon, called quantization, made its appearance. It turned out that some magnitudes are not continuously variable. The mass of an atom can only have a certain value. Atoms emit light at sharply defined frequencies. Electric charge is an integral multiple of the elementary charge. In 1905 Albert Einstein suggested that light consists of quanta of energy. Einstein never had problems with the duality of waves and particles, but he rejected its probability interpretation.[16] In Niels Bohr’s atomic theory (1913), the angular momentum of an electron in its atomic orbit is an integer times Max Planck’s reduced constant.[17] (Planck’s reduced constant is h/2π. In Bohr’s theory the angular momentum L=nh/2π, n being the orbit’s number. For the hydrogen atom, the corresponding energy is En=E1/n2, with E1=-13.6 eV, the energy of the first orbit.)
Until Erwin Schrödinger and Werner Heisenberg in 1926 introduced modern quantum mechanics, repeatedly atomic scientists found new quantum numbers with corresponding rules.
The dualism of matter and field, of particles and waves, was productive as long as its components were studied separately. Problems arose when scientists started to work at the interaction between matter and field. The first problem concerned the specific emission and absorption of light restricted to spectral lines, characteristic for chemical elements and their compounds. Niels Bohr tentatively solved this problem in 1913. The spectral lines correspond to transitions between stationary energy states. The second question was under which circumstances light can be in equilibrium with matter, for instance in an oven. This concerns the shape of the continuous spectrum of black radiation. After a half century of laborious experimental and theoretical work, this problem led to Max Planck’s theory (1900) and Albert Einstein’s photon hypothesis (1905). According to Planck, the interaction between matter and light of frequency f is in need of the exchange of energy packets of E = hf (h being Planck’s constant). Einstein suggested that light itself consists of quanta of energy. Later he added that these quanta have linear momentum as well, proportional to the wave number (s), p=E/c=hs=h/λ. The relation between energy and frequency (E=hf), applied by Bohr in his atomic theory of 1913, was experimentally confirmed by Robert Millikan in 1916, and the relation between momentum and wave number in 1922 by Arthur Compton. The particle character of electromagnetic radiation is easiest to demonstrate with high-energetic photons in gamma- or X-rays. The wave character is easiest proven with low-energetic radiation, with radio or microwaves.
Until 1920, Planck and Einstein did not have many adherents to their views. As late as 1924, Niels Bohr, Hendrik Kramers and John Slater published a theory of electromagnetic radiation, fighting the photon hypothesis at all cost.[18] They went as far as abandoning the laws of conservation of energy and momentum at the atomic level. That was after the publication of Arthur Compton’s effect, describing the collision of a gamma-particle with an electron conserving energy and momentum. Within a year, experiments by Walther Bothe and Hans Geiger proved the ‘BKS-theory’ to be wrong. In 1924 Satyendra Bose and Albert Einstein derived Max Planck’s law from the assumption that electromagnetic radiation in a cavity behaves like an ideal gas consisting of photons.
In 1923, Louis de Broglie published a mathematical paper about the wave-particle character of light. [19] Applying the theory of relativity, he predicted that electrons too would have a wave character. The motion of a particle or energy quantum does not correspond to a single monochromatic wave but to a group of waves, a wave packet. The speed of a particle cannot be related to the wave velocity (λ/T), being larger than the speed of light for a material particle. Instead, the particle speed corresponds to the speed of the wave packet, the group velocity. This is the derivative of frequency with respect to wave number rather than their quotient. Because of the relations of Planck and Einstein, this is the derivative of energy with respect to momentum as well (dE/dp). At most, the group velocity equals the speed of light. (The group velocity dE/dp equals approximately E/p>c and dE/dp<c follow from the relativistic relation between energy and momentum, E=(Eo2+c2p2)1/2, where Eo is the particle’s rest energy. Only if Eo=0, E/p=dE/dp=c. Observe that the word ‘group’ for a wave packet has a different meaning than in the mathematical theory of groups.)
In order to test these suggestions, physicists had to find out whether electrons show interference phenomena. Experiments by Clinton Davisson and Lester Germer in America and by George Thomson in England (1927) proved convincingly the wave character of electrons, thirty years after George’s father Joseph Thomson established the particle character of electrons. As predicted by Louis De Broglie, the linear momentum turned out to be proportional to the wave number. Afterwards the wave character of atoms and nucleons was demonstrated experimentally.
We have seen that it took quite a long time before physicists accepted the particle character of light. Likewise, the wave character of electrons was not accepted immediately, but about 1930 no doubt was left among pre-eminent physicists.
This meant the end of the wave-particle (or matter-field) dualism, implying all phenomena to have either a wave character or a particle character, and the beginning of wave-particle duality being recognized as a universal property of matter. In 1927, Niels Bohr called the wave and particle properties complementary.[20] Bohr’s principle of complementarity presupposes that quantum phenomena only occur at an atomic level, which is refuted in solid state physics. According to Bohr, a measuring system is an indivisible whole, subject to the laws of classical physics, showing either particle or wave phenomena. In different measurement systems, these phenomena would give incompatible results. This view is out of date.
The concept of complementarity is not well-defined. Sometimes, non-commuting operators and the corresponding variables (like position and momentum) are called ‘complementary’ as well, at least if their ‘commutator’ is a number.
An interesting aspect of a wave is that it concerns a movement in motion, a propagating oscillation. Classical mechanics restricted itself to the motion of unchangeable pieces of matter. For macroscopic bodies like billiard balls, bullets, cars and planets, this is a fair approximation, but for microscopic particles it is not. Even in classical physics, the idea of a point-like particle is controversial. Both its mass density and charge density are infinite, and its intrinsic angular momentum cannot be defined.
The experimentally established fact of photons, electrons, and other microsystems having both wave and particle properties does not fit the still popular mechanistic worldview. However, the theory of characters accounts for this fact as follows.
The character of an electron consists of an interlacement of two characters, a generic kinetic wave character and an accompanying specific particle character that is physically qualified. The specific character (different for different physical kinds of particles) determines primarily how electrons interact with other physical subjects, and secondarily which magnitudes play a role in this interaction. These characteristics distinguish the electron from other particles, like protons and atoms being spatially founded, and like photons having a kinetic foundation (5.2-5.4).
Interlaced with the specific character is a generic pattern of motion having the kinetic character of a wave packet. Electrons share this generic character with all other particles. In experiments demonstrating the wave character, there is little difference between electrons, protons, neutrons, or photons. The generic wave character has primarily a kinetic qualification and secondarily a spatial foundation (4.2). The specific physical character determines the boundary conditions and the actual shape of the wave packet. Its wavelength is proportional to its linear momentum, its frequency to its energy. A free electron’s wave packet looks different from that of an electron bound in a hydrogen atom.
The wave character representing the electron’s motion has a tertiary characteristic as well, anticipating physical interaction. The wave function describing the composition of the wave packet determines the probability of the electron’s performance as a particle in any kind of interaction.
A purely periodic wave is infinitely extended in both space and time. It is unfit to give an adequate description of a moving particle, being localized in space and time. A packet of waves having various amplitudes, frequencies, wavelengths, and phases delivers a pattern that is more or less localized. The waves are superposed such that the net amplitude is zero almost everywhere in space and time. Only in a relatively small interval (to be indicated by Δ) the net amplitude differs from zero.
Let us restrict the discussion to rectilinear motion of a wave packet at constant speed. Now the motion is described by four magnitudes. These are the position (x) of the packet at a certain instant of time (t), the wave number (s) and the frequency (f).
The packet is an aggregate of waves with frequencies varying within an interval Δf and wave numbers varying within an interval Δs. Generally, it is provable that the wave packet in the direction of motion has a minimum dimension Δx such that Δx.Δs>1. In order to pass a certain point, the packet needs a time Δt, for which Δt.Δf>1. If we want to compress the packet (Δx and Δt small), the packet consists of a wide spectrum of waves (Δs and Δf large). Conversely, a packet with a well defined frequency (Δs and Δf small) is extended in time and space (Δx and Δt large). It is impossible to produce a wave packet whose frequency (or wave number) has a precise value, and whose dimension is point-like simultaneously. If we make the variation Δs small, the length of the wave packet Δx is large. Or we try to localize the packet, but then the wave number shows a large variation.
Sometimes a wave packet is longer than one might believe. A photon emitted by an atom has a dimension of Δx=cΔt, Δt being equal to the mean duration of the atom’s metastable state before the emission. Because Δt is of the order of 10-8 sec and c=3*108 m/sec, the photon’s ‘coherence length’ in the direction of motion is several metres. This is confirmed by interference experiments, in which the photon is split into two parts, to be reunited after the parts have transversed different paths. If the path difference is less than a few metres, interference will occur, but this is not the case if the path difference is much longer. The coherence length of photons in a laser ray is many kilometres long, because in a laser, Δt has been made artificially long.
An oscillating system emits or absorbs a wave packet as a whole.During its motion, the coherence of the composing waves is not always spatial. A wave packet can split itself without losing its kinetic coherence. This coherence is expressed by phase relations, as can be demonstrated in interference experiments as described above. In general, two different wave packets do not interfere in this way, because their phases are not correlated. This means that a wave packet maintains its kinetic identity during its motion. The physical unity of the particle comes to the fore when it is involved in some kind of interaction, for instance if it is absorbed by an atom causing a black spot on a photographic plate or a pulse in a counter tube named after Hans Geiger and Walther Müller. Emission and absorption are physically qualified events, in which an electron or a photon acts as an indivisible whole.
The identification of a particle with a wave packet seems to be problematic for various reasons. The first problem, the possible splitting and absorption of a wave packet, is mentioned above.
Second, the wave packet of a freely moving particle always expands, because of the composing waves having different velocities. (Light in vacuum is an exception.) Even if the wave packet is initially well localized, gradually it is smeared out over an increasing part of space and time. However, the assumption that the wave function satisfies a linear wave equation is a simplification of reality. Wave motion can be non-linearly represented by a ‘soliton’ that does not expand. Unfortunately, a non-linear wave equation is mathematically more difficult to treat than a linear one.
Third, in 1926 Werner Heisenberg observed that the wave packet is subject to a law known as indeterminacy relation, uncertainty relation or Heisenberg relation. As a matter of fact, there is as little agreement about its definition as about its name.
Combining the relations Δx.Δs>1 and Δtf>1 with those of Max Planck (E=hf) and Albert Einstein (p=hs) leads to Heisenberg’s relations for a wave packet: Δxp>h and ΔtE>h. (The values of ‘1’ respectively ‘h’ in the mentioned relations indicate an order of magnitude. Sometimes other values are given).[21] The meaning of Δx etc. is given above. In particular, Δt is the time the wave packet needs to pass a certain point. If Δxstf=1, the wave packet’s speed vxtfs is approximately the group velocity df/ds, according to Louis De Broglie. This interpretation is the oldest one, for the indeterminacy relations – without Planck’s constant - were applied in communication theory (where Δf is the band width) long before the birth of quantum mechanics.[22] It is interesting to observe that the indeterminacy relations are not characteristic for quantum mechanics, but for wave motion. The relations are an unavoidable consequence of the wave character of particles and of signals. I shall discuss some alternative interpretations, in particular paying attention to Heisenberg’s relation between energy and time.[23]
Quantum mechanics connects any variable magnitude with a Hermitean operator having eigenfunctions and eigenvalues (2.3). The eigenvalues are the possible values for the magnitude in the system concerned. In a measurement, the scalar product of the system’s state function with an eigenfunction of the operator is the square of the probability that the corresponding eigenvalue will be realized.
If two operators act successively on a function, the result may depend on their order. Heisenberg’s relation Δxp > h can be derived as a property of the non-commuting operators for position and linear momentum. In fact, each pair of non-commuting operators gives rise to a similar relation. This applies, e.g., to each pair out of the three components of angular momentum. Consequently, only one component of an electron’s magnetic moment (usually along a magnetic field) can be measured. The other two components are undetermined, as if the electron exerts a precessional motion about the direction of the magnetic field.
Remarkably, there is no operator for kinetic time. Therefore, some people deny the existence of a Heisenberg relation for time and energy.[24] On the other hand, the operator for energy, called Hamilton-operator or Hamiltonian after William Hamilton, is very important. Its eigenvalues are the energy levels characteristic for e.g. an atom or a molecule. Each operator commuting with the Hamiltonian represents a ‘constant of the motion’ subject to a conservation law.
From the wave function, the probability to find a particle in a certain state can be calculated. Now the indeterminacy is a measure of the mean standard deviation, the statistical inaccuracy of a probability calculation. The indeterminacy of time can be interpreted as the mean lifetime of a metastable state. If the lifetime is large (and the state is relatively stable), the energy of the state is well defined. The rest energy of a short living particle is only determined within the margin given by the Heisenberg relation for time and energy.
This interpretation is needed to understand why an atom is able to absorb a light quantum emitted by another atom in similar circumstances. Because the photon carries linear momentum, both atoms get momentum and kinetic energy. The photon’s energy would fall short to excite the second atom. Usually this shortage is smaller than the uncertainty in the energy levels concerned. However, this is not always the case for atomic nuclei. Unless the two nuclei are moving towards each other, the process of emission followed by absorption would be impossible. Rudolf Mössbauer discovered this consequence of Heisenberg’s relations in 1958. Since then, Mössbauer’s effect became an effective instrument for investigating nuclear energy levels.
The position of a wave packet is measurable within a margin of Δx and its linear momentum within a margin of Δp. Both are as small as experimental circumstances permit, but their product has a minimum value determined by Heisenberg’s relation. The accuracy of the measurement of position restricts that of momentum.
Initially the indeterminacy was interpreted as an effect of the measurement disturbing the system. The measurement of one magnitude disturbs the system such that another magnitude cannot be measured with an unlimited accuracy. Heisenberg explained this by imagining a microscope exploiting light to determine the position and the momentum of an electron.[25] Later, this has appeared to be an unfortunate view. It seems better to consider Heisenberg’s relations to be the cause of the limited accuracy of measurement, rather than to be its effect.
The Heisenberg relation for energy and time has a comparable consequence for the measurement of energy. If a measurement has duration Δt, its accuracy cannot be better than ΔE>ht.
In quantum mechanics, the law of conservation of energy achieves a slightly different form. According to the classical formulation, the energy of a closed system is constant. In this statement, time does not occur explicitly. The system is assumed to be isolated for an indefinite time, and that is questionable. Heisenberg’s relation suggests a new formulation. For a system isolated during a time interval Δt, the energy is constant within a margin of ΔEht. Within this margin, the system shows spontaneous energy fluctuations, only relevant if Δt is very small. In fact, the value of ΔE is less significant than the relative indeterminacy ΔE/E. For a macroscopic system the energy E is so much larger than ΔE that the energy fluctuations can be neglected, and the law of conservation of energy remains valid.
According to quantum field theory, a physical vacuum is not an empty space. Spontaneous fluctuations may occur. A fluctuation leads to the creation and annihilation of a virtual photon or a virtual pair consisting of a particle and an antiparticle, having an energy of ΔE, within the interval Δt<hE. Meanwhile the virtual particle or pair is able to exert an interaction, e.g. a collision between two real particles. (Such virtual processes are depicted in the diagrams called after Richard Feynman.) Virtual particles are not directly observable but play a part in several real processes.
The amplitude of waves in water, sound, and light corresponds to a measurable physical real magnitude. In water this is the height of its surface, in sound the pressure of air, in light the electromagnetic field strength. The energy of the wave is proportional to the square of the amplitude. This interpretation is not applicable to the waves for material particles like electrons. In this case the wave has a less concrete character, it has no direct physical meaning. Even in mathematical terms, the wave is not real, for the wave function has a complex value.
In 1926, Max Born offered a new interpretation, since then commonly accepted.[26] He stated that a wave function (real or complex) is a probability function. In a footnote added in proof, Born observed that the probability is proportional to the square of the wave function.
The wave function we are talking about is prepared at an earlier interaction, for instance, the emission of the particle. It changes during its motion, and one of its possibilities is realized at the next interaction, like the particle’s absorption. The wave function expresses the transition probability between the initial and the final state.[27]
This probability may concern any measurable property that is variable. Hence, it does not concern natural constants like the speed of light or the charge of the electron. According to Born, the probability interpretation bridges the apparently incompatible wave and particle aspects: ‘The true philosophical import of the statistical interpretation consists in the recognition that the wave-picture and the corpuscle-picture are not mutually exclusive, but are two complementary ways of considering the same process’.[28] Wave properties determine the probability of position, momentum, etc., traditionally considered properties of particles.
Classical mechanics used statistics as a mathematical means, assuming that the particles behave deterministic in principle. In 1926, Born’s probability interpretation put a definitive end to mechanist determinism, having lost its credibility before because of radioactivity. Waves and wave motion are still determined, e.g. by Schrödinger’s equation, even if no experimental method exists to determine the phase of a wave. However, the wave function determines only the probability of future interactions. The fact that quantum physics is a stochastic theory has evoked widely differing reactions. Albert Einstein considered the theory incomplete. Max Born stressed that at least waves behave deterministically, only its interpretation having a statistical character. Niels Bohr accepted a fundamental stochastic element in his world-view. In quantum mechanics, the particles themselves behave stochastically.
Even more strange is that chance is subject to interference. In the traditional probability calculus (2.4) probabilities can be added or multiplied. Nobody ever imagined that probabilities could interfere. Interference of waves may result in an increase of probability, but to a decrease as well, even to the extinction of probability. Hence, besides a probability interpretation of waves, we have a wave interpretation of probability.[29]
Outside quantum mechanics, this is still unheard of, not only in daily life and the humanities, but in sciences like biology and ethology as well. The reason is that interference of probabilities only occurs as long as there is no physical interaction by which a chance realizes itself. Observe that an interference-experiment aims at demonstrating interference. This is only possible if the interference of waves is followed by an interaction of the particles concerned with, e.g., a screen. The absence of physical interaction is an exceptional situation. It only occurs if the system concerned has no internal interactions (or if these are frozen), as long as it moves freely. In macroscopic bodies, interactions occur continuously and interference of probabilities does not occur. Therefore, the phenomenon of interference of chances is unknown outside quantum physics.
The concept of probability or chance anticipates the physical relation frame, because only by means of a physical interaction a chance can be realized. An open-minded spectator observes an asymmetry in time. Probability always concerns future events. It draws a boundary line between a possibility in the present and a realization in the future. For this realization, a physical interaction is needed. The wave equation and the wave function describe probabilities, not their realization. The wave packet anticipates a physical interaction leading to the realization of a chance, but is itself a kinetic subject, not a physical subject. If the particle realizes one of its possibilities, it simultaneously destroys all alternative possibilities. In that respect, there is no difference between quantum mechanics and classical theories of probability.
As long as the position of an electron is not determined, its wave packet is extended in space and time. As soon as an atom absorbs the electron at a certain position, the probability to be elsewhere collapses to zero. Theoretically, this means the projection of a state vector on one of the eigenvectors of Hilbert space, representing all possible states of the system. ‘No other permanent or transient principle of physics has ever given rise to so many comments, criticisms, pleadings, deep remarks, and plain nonsense as the wave function collapse.’[30] In particular, the assumptions that probability is an expression of our limited knowledge of a system and that the observer causes the reduction of the wave packet, have led to a number of subjectivist and solipsist interpretations of quantum physics and related problems, of which I shall only briefly discuss that of Schrödinger’s cat. This so-called reduction of the wave packet requires a velocity far exceeding the speed of light. However, this reduction concerns the wave character, not the physical character of the particle. It does not counter the physical law that no material particle can move faster than light.
Likewise, Schrödinger’s equation describes the states of an atom or molecule and the transition probabilities between states. It does not account for the actual transition from a state to an eigenstate, when the system experiences a measurement or another kind of interaction. According to Nancy Cartwright, ‘This transition therefore does not belong to elementary quantum dynamics. But it is meant to express a physical interaction between the measured object and the measuring apparatus, which one would expect to be a direct consequence of dynamics’.[31] ‘Von Neumann claimed that the reduction of the wave packet occurs when a measurement is made. But it also occurs when a quantum system is prepared in an eigenstate, when one particle scatters from another, when a radioactive nucleus disintegrates, and in a large number of other transition processes as well … There is nothing peculiar about measurement, and there is no special role for consciousness in quantum mechanics.’[32] But contrary to Cartwright stating: ‘… there are not two different kinds of evolution in quantum mechanics. There are evolutions that are correctly described by Schrödinger’s equation, and there are evolutions that are correctly described by something like von Neumann’s projection postulate. But these are not different kinds in any physically relevant sense’,[33] I believe that there is a difference. The first concerns a reversible motion, the second an irreversible physical process: ‘Indeterministically and irreversibly, without the intervention of any external observer, a system can change its state … When such a situation occurs, the probabilities for these transitions can be computed; it is these probabilities that serve to interpret quantum mechanics.’[34]
Is the problem of the reduction of the wave packet relevant for macroscopic bodies as well? Historically, this question is concentrated in the problem of Edwin Schrödinger’s cat, hypothetically locked up alive in a non-transparent case. A mechanism releases a mortal poison at an unpredictable instant, for instance controlled by a radioactive process. As long as the case is not opened, one may wonder whether the cat is still alive. If quantum mechanics is applied consequently, the state of the cat is a mixture, a superposition of two eigenstates, dead and alive, respectively.
The principle of decoherence, developed at the end of the twentieth century, may provide a satisfactory answer. For a macroscopic body, a state being a combination of eigenstates will spontaneously change very fast into an eigenstate, because of the many interactions taking place in the macroscopic system itself. This solves the problem of Schrödinger’s cat, for each superposition of dead and alive transforms itself almost immediately into a state of dead or alive. The principle of decoherence is in some cases provable, though it is not proved generally.[35] Decoherence even occurs in quite small molecules.[36] There are exceptions too, in systems without much internal energy dissipation, e.g. electromagnetic radiation in a transparent medium and superconductors (5.4)[37].
The principle of decoherence is part of a realistic interpretation of quantum physics. It does not idealize the ‘reduction of the wave packet’ to a projection in an abstract state space. It takes into account the character of the macroscopic system in which a possible state is realized by means of a physical interaction.
The so-called measurement problem constitutes the nucleus of what is usually called the interpretation of quantum mechanics. ‘The interpretive challenge of quantum theory is often presented in terms of the measurement problem: i.e., that the formalism itself does not specify that only one outcome happens, nor does it explain why or how that particular outcome happens. This is the context in which it is often asserted that the theory is incomplete and is therefore in need of alteration in some way.’[38] It is foremost a philosophical problem, not a physical one, which is remarkable, because measurement is part of experimental physics, and the starting point of theoretical physics. After the development of quantum physics, both experimental and theoretical physicists have investigated the relevance of symmetry, and the structure of atoms and molecules, solids and stars, and subatomic structures like nuclei and elementary particles. Apparently, this has escaped the attention of many philosophers, who are still discussing the consequences of Heisenberg’s indeterminacy relations.
Encyclopaedia of relations and characters. 4. Periodic motion
4.4. Symmetric and antisymmetric
wave functions
The concept of probability is applicable to a single particle as well as to a homogeneous set of similar particles, a gas consisting of molecules, electrons or photons. In order to study such systems, since circa 1860 statistical physics has developed various mathematical methods. A distribution function points out how the energy is distributed over the particles, how many particles have a certain energy value, and how the average energy depends on temperature. In any distribution function, the temperature is an important equilibrium parameter.
Classical physics assigned each particle its own state, but in quantum physics, this would lead to wrong results. It is better to design the possible states, and to calculate how many particles occupy a given state, without questioning which particle occupies which state. It turns out that there are two entirely different cases, referring to ‘bosons’, respectively ‘fermions’.[39]
In the first case, the occupation number of particles in a well-defined state is unlimited. Bosons like photons are subject to a distribution function in 1924 derived by Satyendra Bose and published by Albert Einstein, hence called Bose-Einstein statistics. Bosons have an integral spin, the occupation number of each state may vary from zero to infinity. An integral spin means that the intrinsic angular momentum is an integer times Planck’s reduced constant, 0, h/2π, 2h/2π, etc. A half-integral spin means that the intrinsic angular moment has values like (1/2)h/2π, (3/2)h/2π. I shall not discuss the connection of integral spin with bosons and half-integral spin with fermions.
In the other case, each well-defined state is occupied by at most one particle, according to Wolfgang Pauli’s exclusion principle. The presence of a particle in a given state excludes the presence of another similar particle in the same state. Fermions like electrons, protons, and neutrons have a half-integral spin. They are subject to the distribution function that Enrico Fermi and Paul Dirac derived in 1926.
In both cases, the distribution approximates the classical Maxwell-Boltzmann distribution function, if the mean occupation of available states is much smaller than 1. This applies to molecules in a classical gas (2.4).
The distinction of fermions and bosons rests on permutation symmetry. In a finite set the elements can be ordered into a sequence and numbered using the natural numbers as indices. For n elements, this can be done in n!=…n different ways. The n! permutations are symmetric if the elements are indistinguishable. Permutation symmetry is not spatial but quantitative.
In a system consisting of a number of similar particles, the state of the aggregate can be decomposed into a product of separate states for each particle apart. (It is by no means obvious that the state function of an electron or photon gas can be written as a product (or rather a sum of products) of state functions for each particle apart, but it turns out to be a quite close approximation.) A permutation of the order of similar particles should not have consequences for the state of the aggregate as a whole. However, in quantum physics only the square of a state is relevant to probability calculations. Hence, exchanging two particles allows of two possibilities: either the state is multiplied by +1 and does not change, or it is multiplied by –1. In both cases, a repetition of the exchange produces the original state. In the first case, the state is called symmetric with respect to a permutation, in the second case antisymmetric.
In the antisymmetric case, if two particles would occupy the same state an exchange would simultaneously result in multiplying the state by +1 (because nothing changes) and by –1 (because of antisymmetry), leading to a contradiction. Therefore, two particles cannot simultaneously occupy the same state. This is Wolfgang Pauli’s exclusion principle concerning fermions. No comparable principle applies to bosons, having symmetric wave functions with respect to permutation.
Both a distribution function like the Fermi-Dirac statistics and Pauli’s exclusion principle are only applicable to a homogeneous aggregate of similar particles. In a heterogeneous aggregate like a nucleus, they must be applied to the protons and neutrons separately.
The distinction of fermions and bosons, and the exclusion principle for fermions, have a fundamental significance for the understanding of the characters of material things containing several similar particles. To a large extent, it explains the orbital structure of atoms and the composition of nuclei from protons and neutrons.
When predicting the wave character of electrons, Louis de Broglie suggested that the stability of the electronic orbit in a hydrogen atom is explainable by assuming that the electron moves around the nucleus as a standing wave. This implies that the circumference of the orbit is an integral number times the wavelength. From the classical theory of circular motion, he derived that the orbital angular momentum should be an integral number times Max Planck’s reduced constant (h/2π). This is precisely the quantum condition applied by Niels Bohr in 1913 in his first atomic theory. For a uniform circular motion with radius r, the angular momentum L=rp. The linear momentum p = h/l (l for lambda, the wave length) according to Einstein. If the circumference 2πr = nl, n being a positive integer, then L=nlp/2π=nh/2π. Quantum mechanics allows of the value L=0 for orbital angular momentum. This has no analogy as a standing wave on the circumference of a circle.
The atomic physicists at Copenhagen, Göttingen, and Munich considered de Broglie’s idea rather absurd, but it received support from Albert Einstein, and it inspired Edwin Schrödinger to develop his wave equation.[40] In a stable system, Schrödinger’s equation is independent of time and its solutions are stationary waves, comparable to the standing waves in a violin string or an organ pipe. Only a limited number of frequencies are possible, corresponding to the energy levels in atoms and molecules. In contrast, a time-dependent Schrödinger equation describes transitions between energy levels, giving rise to the discrete emission and absorption spectra characteristic for atoms and molecules. Although one often speaks of the Schrödinger equation, there are many variants, one for each physical character. Each variant specifies the system’s boundary conditions and expresses the law for the possible motions of the particles concerned.
In the practice of solid-state physics, the exclusion principle is more important than Schödinger’s equation. This can be elucidated by discussing the model of particles confined to a rectangular box. Again, the wave functions look like standing waves.
In a good approximation the valence electrons in a metal or semiconductor are not bound to individual atoms but are free to move around. The mutual repulsive electric force of the electrons compensates for the attraction by the positive ions. The electron’s energy consists almost entirely of kinetic energy, E=p2/2m, if p is its linear momentum and m its mass.
Because the position of the electron is confined to the box, in Heisenberg’s relation Δx equals the length of the box (analogous for y and z). Because Δx is relatively large, Δp is small and the momentum is well defined. Hence the momentum characterizes the state of each electron and the energy states are easy to calculate. In a three-dimensional momentum space a state denoted by the vector p occupies a volume Δp. Momentum space is a three-dimensional diagram for the vector p’s components, px, py and pz. The volume of a state equals ΔppxΔpyΔpz. In the described model, the states are mostly occupied up till the energy value EF, the ‘Fermi-energy’, determining a sphere around the origin of momentum space. Outside the sphere, most states are empty. A relatively thin skin, its thickness being proportional to the temperature, separates the occupied and empty states.
According to the exclusion principle, a low energy state is occupied by two electrons (because there are two possible spin states), whereas high-energy states are empty. In a metal, this leads to a relatively sharp separation of occupied and empty states. The mean kinetic energy of the electrons is almost independent of temperature, and the specific heat is proportional to temperature, strikingly different from other aggregates of particles.
Mechanical oscillations or sound waves in a solid form wave packets. These bosons are called phonons or sound particles. Bose-Einstein statistics leads to Peter Debije’s law for the specific heat of a solid. At a moderate temperature the specific heat is proportional to the third power of temperature. Except for very low temperatures, the electrons contribute far less to the specific heat of a solid than the phonons do. The number of electrons is independent of temperature, whereas the number of phonons in a solid or photons in an oven strongly depends on temperature.
A similar situation applies to an oven, in which electromagnetic radiation is in thermal equilibrium. According to Planck’s law of radiation, the energy of this boson gas is proportional to the fourth power of temperature. For a gas satisfying the Maxwell-Boltzmann distribution, the energy is proportional to temperature. Some people who got stuck in classical mechanics define temperature as a measure of the mean energy of molecules. Which meaning such a definition should have for a fermion gas or boson gas is unclear.
Hence, the difference between fermion and boson aggregates comes quite dramatically to the fore in the temperature dependence of their energy. Amazingly, the physical character of the electrons, phonons, and photons plays a subordinate part compared to their kinetic character. Largely, the symmetry of the wave function determines the properties of an aggregate. Consequently, a neutron star has much in common with an electron gas in a metal.
The existence of antiparticles is a consequence of a symmetry of the relativistic wave equation. The quantum mechanics of Erwin Schrödinger and Werner Heisenberg in 1926 was not relativistic, but about 1927 Paul Dirac found a relativistic formulation.[41] From his equation follows the electron’s half-integral angular momentum, not as a spinning motion as conceived by its discoverers, Samual Goudsmit and George Uhlenbeck, but as a symmetry property (still called spin).
Dirac’s wave equation had an unexpected result, to wit the existence of negative energy eigenvalues for free electrons. According to relativity theory, the energy E and momentum p for a freely moving particle with rest energy Eo=moc2 are related by the formula: E2=Eo2+(cp)2. For a given value of the linear momentum p, this equation has both positive and negative solutions for the energy E. De positive values are minimally equal to the rest energy Eo and the negative values are maximally -Eo. This leaves a gap of twice the rest energy, about 1 MeV for an electron, much more than the energy of visible light, being about 5 eV per photon. Classical physics could ignore negative solutions, but this is not allowed in quantum physics. Even if the energy difference between positive and negative energy levels is large, the transition probability is not zero. In fact, each electron should spontaneously jump to a negative energy level, releasing a gamma particle having an energy of at least 1 MeV.
Dirac took recourse to Pauli’s exclusion principle. By assuming all negative energy levels to be occupied, he could explain why these are unobserved most of the time, and why many electrons have positive energy values. An electron in one of the highest negative energy levels may jump to one of the lowest positive levels, absorbing a gamma particle having an energy of at least 1 MeV. The reverse, a jump downwards, is only possible if in the nether world of negative energy levels, at least one level is unoccupied. Influenced by an electric or magnetic field, such a hole moves as if it were a positively charged particle. Initially, Dirac assumed protons to correspond to these holes, but it soon became clear that the rest mass of a hole should be the same as that of an electron.
After Carl Anderson in 1932 discovered the positron, a positively charged particle having the electron’s rest mass, this particle was identified with a hole in Dirac’s nether world. This identification took some time.[42] The assumption of the existence of a positive electron besides the negative one was in 1928 much more difficult to accept than in 1932. In 1928, physics acknowledged only three elementary particles, the electron, the proton and the photon. In 1930, the existence of the neutrino was postulated and in 1932, Chadwick discovered the neutron. The completely occupied nether world of electrons is as inert as the nineteenth century ether. It neither moves nor interacts with any other system. That is why we do not observe it. For those who find this difficult to accept, alternative theories are available explaining the existence of antiparticles. Experiments pointed out that an electron is able to annihilate a positron, releasing at least two gamma particles. In the inertial system in which the centre of mass for the electron-positron pair is at rest, their total momentum is zero. Because of the law of conservation of momentum, the annihilation causes the emergence of at least two photons, having opposite momentum.
Meanwhile it is established that besides electrons all particles, bosons included, have antiparticles. Only a photon is identical to its antiparticle. The existence of antiparticles rests on several universally valid laws of symmetry. A particle and its antiparticle have the same mean lifetime, rest energy and spin, but opposite values for charge, baryon number, or lepton number (5.2).
However, if the antiparticles are symmetrical to particles, why are there so few? (Or why is Dirac’s nether world nearly completely occupied?) Probably, this problem can only be solved within the framework of a theory about the early development of the cosmos.
The image of an infinite set of unobservable electrons having negative energy, strongly defeats common sense. However, it received unsolicited support from the so-called band theory in solid-state physics, being a refinement of the earlier discussed free-electron model. The influence of the ions is not completely compensated for by the electrons. An electric field remains having the same periodic structure as the crystal. Taking this field into account, Rudolf Peierls developed the band model. It explains various properties of solids quite well, both quantitatively and qualitatively.
A band is a set of neighbouring energy levels separated from other bands by an energy gap. (A band is comparable to an atomic shell but has a larger bandwidth.) It may be fully or partly occupied by electrons, or it is unoccupied. Both full and empty bands are physically inert. In a metal, at least one band is partly occupied, partly unoccupied by electrons. An isolator has only full (i.e., entirely occupied) bands besides empty bands. The same applies to semiconductors, but now a full band is separated from an empty band by a relatively small gap. According to Peierls in 1929, if energy is added in the form of heat or light (a phonon or a photon), an electron jumps from the lower band to the higher one, leaving a hole behind. This hole behaves like a positively charged particle. In many respects, an electron-hole pair in a semiconductor looks like an electron-positron pair. Only the energy needed for its formation is about a million times smaller. Dirac and Heisenberg corresponded with each other about both theories, initially without observing the analogy.[43]
Another important difference should be mentioned. The set of electron states in Dirac’s theory is an ensemble. In the class of possibilities independent of time and space, half is mostly occupied, the other half is mostly empty. There is only one nether world of negative energy values. In contrast, the set of electrons in a semiconductor is a spatially and temporally restricted collection of electrons, in which some electron states are occupied, others unoccupied. There are as many of these collections as there are semiconductors. To be sure, Peierls was interested in an ensemble as well. In his case, this is the ensemble of all semiconductors of a certain kind. This may be copper oxide, the standard example of a semiconductor in his days, or silicon, the base material of modern chips. But this only confirms the distinction from Dirac’s ensemble of electrons.
Common sense did not turn out to be a reliable guide in the investigation of characters. At the end of the nineteenth century, classical mechanics was considered the paradigm of science. Yet, even then is was clear that daily experience was in the way of the development of electromagnetism, for instance. The many models of the ether were more an inconvenience than a stimulus for research.
When relativity theory and quantum physics unsettled classical mechanics, this led to uncertainty about the reliability of science. At first, the oncoming panic was warded off by the reassuring thought that the new theories were only valid in extreme situations. These situations were, for example, a very high speed, a total eclipse, or a microscopic size. However, astronomy cannot cope without relativity theory, and chemistry fully depends on quantum physics. All macroscopic properties and phenomena of solid-state physics can only be explained in the framework of quantum physics.
Largely, daily experience rests on habituation. In hindsight, it is easy to show that classical mechanics collided with common sense in its starting phase with respect to the law of inertia. Action at a distance in Isaac Newton’s Principia evoked the abhorrence of his contemporaries, but the nineteenth-century public did not experience any trouble with this concept. In the past, mathematical discoveries would cause heated discussions, but the rationality of irrational numbers or the reality of non-Euclidean spaces is now accepted almost as a matter of course.
This does not mean that common sense is always wrong in scientific affairs. The irreversibility of physical processes is part of daily experience. In the framework of the mechanist worldview of the nineteenth century, physicists and philosophers have stubbornly but in vain tried to reduce irreversible processes to reversible motion, and to save determinism. This is also discernible in attempts to find (mostly mathematical) interpretations of quantum mechanics that allow of temporal reversibility and of determinism, such as the so-called many-worlds interpretation, and the transaction interpretation.
Since the twentieth century, mathematics, science and technology dominate our society to such an extent, that new developments are easier to integrate in our daily experience than before. Science has taught common sense to accept that the characters of natural things and events are neither manifest nor evident. The hidden properties of matter and of living beings brought to light by the sciences are applicable in a technology that is accessible for anyone but understood by few. This technology has led to an unprecedented prosperity. Our daily experience adapts itself easily and eagerly to this development.
[1] Lucas 1973, 29.
[2] Stafleu 1987, 61; 2019, 2.1.
[3] Reichenbach 1957, 116-119; Grünbaum 1968, 19, 70; 1973, 22; Stafleu 2019, 4.4-4.5.
[4] Grünbaum 1973, 22-23.
[5] Newton 1687, 13.
[6] Margenau 1950, 139.
[7] Maxwell 1877, 29; Cassirer 1921, 364.
[8] Mach 1883, 217.
[9] Reichenbach 1957, 117.
[10] Reichenbach 1957, 118.
[11] Carnap 1966, chapter 8.
[12] Huygens 1690, 15; Sabra 1967, 212; Stafleu 2019, 3.2.
[13] Newton 1704, 278-282; Sabra 1967, chapter 13.
[14] Achinstein 1991, 24.
[15] Hanson 1963, 13; Jammer 1966, 31.
[16] Klein 1964, Pais 1982, part IV.
[17] Pais 1991, 150.
[18] Bohr, Kramers, Slater 1924; cp. Slater 1975, 11; Pais 1982, chapter 22; 1991, 232-239.
[19] Darrigol 1986.
[20] Bohr 1934, chapter 2; Bohr 1949; Meyer-Abich 1965; Jammer 1966, chapter 7; 1974, chapter 4; Pais 1991, 309-316, 425-436.
[21] Messiah 1961, 133.
[22] Bunge 1967a, 265.
[23] Margenau 1950, chapter 18; Messiah 1961, 129-149; Jammer 1966, chapter 7; Jammer 1974, chapter 3; Omnès 1994, chapter 2.
[24] Bunge 1967a, 248, 267.
[25] Heisenberg 1930, 21-23.
[26] Jammer 1974, 38-44.
[27] Cartwright 1983, 179.
[28] M. Born, Atomic physics, Blackie 1944, quoted by Bastin (ed.) 1971, 5.
[29] Heisenberg 1958, 25.
[30] Omnès 1994, 509.
[31] Omnès 1994, 84:
[32] Cartwright 1983, 195.
[33] Cartwright 1983, 179.
[34] Cartwright 1983, 179.
[35] Omnès 1994, chapter 7, 484-488; Torretti 1999, 364-367.
[36] Omnès 1994, 299-302.
[37] Omnès 1994, 269.
[38] Kastner 2013, 202:
[39] Jammer 1966, 338-345.
[40] Klein 1964; Raman, Forman 1969.
[41] Kragh 1990, chapter 3, 5.
[42] Hanson 1963, chapter IX.
[43] Kragh 1990, 104-105.
Encyclopaedia of relations and characters,
their evolution and history
Chapter 5
Physical characters
5.1. The unification of physical interactions
5.2. The character of electrons
5.3. The quantum ladder
5.4. Individualized currents
5.5. Aggregates and statistics
5.6. Coming into being, change and decay
Encyclopaedia of relations and characters. 5. Physical characters
5.1. The unification of physical interactions
The aim of chapter 5 is a philosophical analysis of physical characters. Their relevance can hardly be overestimated. The discovery of the electron in 1897 provided the study of the structure of matter with a strong impulse, both in physics and in chemistry. Our knowledge of atoms and molecules, of nuclei and sub-atomic particles, of stars and stellar systems, dates largely from the twentieth century. The significance of electrotechnology and electronics for the present society is overwhelming.
The physical aspect of the cosmos is characterized by interactions between two or more subjects. Interaction is a relation different from the quantitative, spatial, or kinetic relations, on which it can be projected. It is subject to natural laws. Some laws are specific, like the electromagnetic ones, determining characters of physical kinds. Some laws are general, like the laws of thermodynamics and the laws of conservation of energy, linear and angular momentum. The general laws constitute the physical-chemical relation frame. Both for the generic and the specific laws, physics has reached a high level of unification.
Because of their relevance to study types of characters, this chapter starts with an analysis of the projections of the physical relation frame onto the three preceding ones (5.1). Next, I investigate the characters of physically stable things, consecutively quantitatively, spatially, and kinetically founded (5.2-5.4). Section 5.5 surveys aggregates and statistics. Finally, section 5.6 reviews processes of coming into being, change, and decay.
The existence of physically qualified things and events implies their interaction, the universal physical relation. If something could not interact with anything else it would be inert. It would not exist in a physical sense, and it would have no physical place in the cosmos. Groups, spatial figures, waves and oscillations do not interact, hence are not physical unless interlaced with physical characters. The noble gases are called inert because they hardly ever take part in chemical compounds, yet their atoms are able to collide with each other. The most inert things among subatomic particles are the neutrino’s, capable of flying through the earth with a very small probability of colliding with a nucleus or an electron. Nevertheless, neutrinos are detectable and have been detected.
Wolfgang Pauli postulated the existence of neutrinos in 1930 in order to explain the phenomenon of β-radioactivity. Neutrino’s were not detected experimentally before 1956. According to a physical criterion, neutrino’s exist if they demonstrably interact with other particles. Sometimes it is said that the neutrino was ‘observed’ for the first time in 1956. Therefore one has to stretch the concept of ‘observation’ quite far. In no experiment neutrino’s can be seen, heard, smelled, tasted, or felt. Even their path of motion cannot be made visible in any experiment. But in several kinds of experiment, from observable phenomena the energy and momentum (both magnitude and direction) of individual neutrino’s can be calculated. For a physicist, this provides sufficient proof for their physical existence as interacting particles.
The universality of the relation frames allows science of comparing characters with each other and to determine their specific relations. The projections of the physical relation frame onto the preceding frames allow us to measure these relations. Measurability is the base of the mathematization of the exact sciences. It allows of applying statistics and designing mathematical models for natural and artificial systems.
The simplest case of interaction concerns two isolated systems interacting only with each other. Thermodynamics characterizes an isolated or closed system by magnitudes like energy and entropy. The two systems have thermal, chemical, or electric potential differences, giving rise to currents creating entropy. According to the second law of thermodynamics, this interaction is irreversible. Here ‘system’ is a general expression for a bounded part of space inclusive of the enclosed matter and energy. A closed system does not exchange energy or matter with its environment. Entropy can only be defined properly if the system concerned is in internal equilibrium and isolated from its environment.
In kinematics, an interactive event may have the character of a collision, minimally leading to a change in the state of motion of the colliding subjects. Often, the internal state of the colliding subjects changes as well. Except for the boundary case of an elastic collision, these processes are subject to the physical order of irreversibility. Frictionless motion influenced by a force is the standard example of a reversible interaction. In fact, it is also a boundary case, for any kind of friction or energy dissipation causes motion to be irreversible.
The law of inertia expresses the independence of uniform motion from physical interaction. It confirms the existence of uniform and rectilinear motions having no physical cause. This is an abstraction, for concrete things experiencing forces have a physical aspect as well. In reality a uniform rectilinear motion only occurs if the forces acting on the moving body balance each other.
Kinetic time is symmetric with respect to past and future. If in the description of a motion the time parameter (t) is replaced by its reverse (–t), we achieve a valid description of a possible motion. In the absence of friction or any other kind of energy dissipation, motion is reversible. By distinguishing past and future we are able to discover cause-effect relations, assuming that an effect never precedes its cause. According to relativity theory, the order of events having a causal relation is in all inertial systems the same, provided that time is not reversed.
In our common understanding of time, the discrimination of past and future is a matter of course,[1] but in the philosophy of science it is problematic. The existence of irreversible processes cannot be denied. All motions with friction are irreversible. Apparently, the absorption of light by an atom or a molecule is the reverse of emission, but Albert Einstein demonstrated that the reverse of (stimulated) absorption is stimulated emission of light, making spontaneous emission a third process, having no reverse (5.6). This applies to radioactive processes as well. The phenomenon of decoherence makes most quantum processes irreversible.[2] Only wave motion subject to Edwin Schrödinger’s equation is symmetric in time. Classical mechanics usually expresses interaction by a force between two subjects, this relation being symmetric according to Newton’s third law of motion. However, this law is only applicable to spatially separated subjects if the time needed to establish the interaction is negligible, i.e., if the action at a distance is (almost) instantaneous. Einstein made clear that interaction always needs time, hence even interaction at a distance is asymmetric in time.
Irreversibility does not imply that the reverse process is impossible. It may be less probable, or requiring quite different initial conditions. The transport of heat from a cold to a hotter body (as occurs in a refrigerator) demands different circumstances from the reverse process, which occurs spontaneously if the two bodies are not thermally isolated from each other. A short living point-like source of light causes a flash expanding in space. It is not impossible but practically very difficult to reverse this wave motion, for instance applying a perfect spherical mirror with the light source at the centre. But even in this case, the reversed motion is only possible thanks to the first motion, such that the experiment as a whole is still irreversible.
Yet, irreversibility as a temporal order is philosophically controversial, for it does not fit into the reductionist worldview influenced by nineteenth-century mechanism.[3] This worldview assumes each process to be reducible to motions of as such unchangeable pieces of matter, interacting through Newtonian forces. Ludwig Boltzmann attempted to bridge reversible motion and irreversible processes by means of the concepts of probability and randomness. In order to achieve the intended results, he had to assume that the realization of chances is irreversible. According to Hans Reichenbach, ‘the direction of time is supplied by the direction of entropy, because the latter direction is made manifest in the statistical behaviour of a large number of separate systems, generated individually in the general drive to more and more probable states.’[4] But he also observes: ‘The inference from time to entropy leads to the same result whether it is referred to the following or to preceding events’.[5] One may conclude that ‘… the one great law of irreversibility (the Second Law) cannot be explained from the reversible laws of elementary particle mechanics…’.[6]
It is sometimes stated that all ‘basic’ laws of physics are symmetrical in time. This seems to be true as far as kinetic time is concerned, and if any law that belies temporal symmetry (like the second law of thermodynamics, or the law for spontaneous decay) is not considered ‘basic’. Anyhow, all attempts to reduce irreversibility to the subject side of the physical aspect of reality have failed.
Interaction is first of all subject to general laws independent of the specific character of the things involved. Some conservation laws are derivable from Albert Einstein’s principle of relativity, stating that the laws of physics are independent of the motion of inertial systems.
Being the physical subject-subject relation, interaction may be analysed with the help of quantitative magnitudes like energy, mass, and charge; spatial concepts like force, momentum, field strength, and potential difference; as well as kinetic expressions like currents of heat, matter, or electricity.
Like interaction, energy, force, and current are abstract concepts. Yet these are not merely covering concepts without physical content. They can be specified as projections of characteristic interactions like the electromagnetic one. Electric energy, gravitational force, and the flow of heat specify the abstract concepts of energy, force, and current.
For energy to be measurable, it is relevant that one concrete form of energy is convertible into another one. For instance, a generator transforms mechanical energy into electric energy. Similarly, a concrete force may balance another force, whereas a concrete current accompanies currents of a different kind. This means that characteristically different interactions are comparable, they can be measured with respect to each other. The physical subject-subject relation, the interaction projected as energy, force, and current, is the foundation of the whole system of measuring, characteristic for astronomy, biology, chemistry, physics, as well as technology. The concepts of energy, force, and current enable us to determine physical subject-subject relations objectively.
Measurement of a quantity requires several conditions to be fulfilled. First, a unit should be available. A measurement compares a quantity with an agreed unit. Secondly, a magnitude requires a law, a metric, determining how a magnitude is to be projected on a set of numbers, on a scale. The third requirement, being the availability of a measuring instrument, cannot always be directly satisfied. A magnitude like entropy can only be calculated from measurements of other magnitudes. Fourth, therefore, there must be a fixed relation between the various metrics and units, a metrical system. This allows of the application of measured properties in theories. Unification of units and scales such as the metric system is a necessary requirement for the communication of both measurements and theories.
I shall discuss the concepts of energy, force, and current in some more detail. It is by no means evident that these concepts are the most general projections of interaction. Rather, their development has been a long and tedious process, leading to a general unification of natural science, to be distinguished from a more specific unification to be discussed later on.
a. Since the middle of the nineteenth century, energy is the most important quantitative expression of physical, chemical, and biotic interactions.[7] As such it has superseded mass, in particular since it is known that mass and energy are equivalent, according to physics’ most famous formula, E=mc2. The formula means that mass and energy are equivalent, that each amount of energy corresponds with an amount of mass and conversely. It does not mean that mass is a form of energy, or can be converted into energy, as it is often misunderstood. Energy is specifiable as kinetic and potential energy, thermal energy, nuclear energy, or chemical energy. Affirming the total energy of a closed system to be constant, the law of conservation of energy implies that one kind of energy can be converted into another one, but not mass into energy. For this reason, energy forms a universal base for comparing various types of interaction.
Before energy, mass became a universal measure for the amount of matter, serving as a measure for gravity as well as for the amount of heat that a subject absorbs when heated by one degree. Energy and mass are general expressions of physical interaction. This applies to entropy and related thermodynamic concepts too. In contrast, the rest energy and the rest mass of a particle or an atom are characteristic magnitudes.
Velocity is a measure for motion, but if it concerns physically qualified things, linear momentum (quantity of motion, the product of mass and velocity) turns out to be more significant. The same applies to angular momentum (quantity of rotation, the product of moment of inertia and angular frequency; angular frequency equals 2π times the frequency. The moment of inertia is an expression of the distribution of matter about a body with respect to a rotation axis.) In the absence of external forces, linear and angular momentum are subject to conservation laws. Velocity, linear and angular momentum, and moment of inertia are not expressed by a single number (a scalar) but by vectors or tensors. Relativity theory combines energy (a scalar) with linear momentum (a vector with three components) into a single vector, having four components.
b. According to Isaac Newton’s third law, the mechanical force is a subject-subject relation.[8] If A exerts a force F on B, then B exerts a force –F on A. The minus sign indicates that the two forces being equal in magnitude have opposite directions. The third law has exerted a strong influence on the development of physics during a quite long time. In certain circumstances, the law of conservation of linear momentum can be derived from it. However, nowadays physicists allot higher priority to this conservation law than to Newton’s third law. In order to apply Newton’s laws when more than one force is acting, we have to consider the forces simultaneously. This does not lead to problems in the case of two forces acting on the same body. But the third law is especially important for action at a distance, inherent in the Newtonian formulation of gravity, electricity, and magnetism. In Albert Einstein’s theory of relativity, simultaneity at a distance turns out to depend on the motion of the reference system. The laws of conservation of linear momentum and energy turn out to be easier to amend to relativity theory than Newton’s third law. Now one describes the interaction as an exchange of energy and momentum (mediated by a field particle like a photon). This exchange requires a certain span of time.
Newton’s second law provides the relation between force and momentum: the net force equals the change of momentum per unit of time. The law of inertia seems to be deductible from Newton’s second law. If the force is zero, momentum and hence velocity is constant, or so it is argued. However, if the first law would not be valid, there could be a different law, assuming that each body experiences a frictional force, dependent on speed, in a direction opposite to the velocity. (In its most simple form, F=-bv, b>0.) Accordingly, if the total force on a body is zero, the body would be at rest. A unique reference system would exist in which all bodies on which no forces act would be at rest. This is the nucleus of Aristotle’s mechanics, but it contradicts both the classical principle of relativity and the modern one. The principle of relativity is an alternative expression of the law of inertia, pointing out that absolute (non-relative) uniform motion does not exist. Just like spatial position on the one hand and interaction on the other side, motion is a universal relation.
Besides to a rigid body, a force is applicable to a fluid, usually in the form of a pressure (i.e., force per area). A pressure difference causes a change of volume or a current subject to Daniel Bernoulli’s law, if the fluid is incompressible. Besides, there are non-mechanical forces causing currents. A temperature gradient causes a heat current, chemical potentials drive material flows (e.g., diffusion), and an electric potential difference directs an electric current.
To find a metric for a thermodynamic or an electric potential is not an easy task. On the basis of an analysis of the idealized cycles devised by Sadi Carnot, William Thomson (later Lord Kelvin) established the theoretical metric for the thermodynamic temperature scale.[9] The practical definition of the temperature scale takes this theoretical ‘absolute’ scale as a norm. The definition of the metric of pressure is relatively easy, but finding the metric of electric potential caused almost as much trouble as the development of the thermodynamic temperature scale.
The Newtonian force can sometimes be written as the derivative of a potential energy (i.e., energy as a function of spatial position). Since the beginning of the nineteenth century, the concept of a force is incorporated in the concept of a field. At first a field was considered merely a mathematical device, until James Clerk Maxwell proved the electromagnetic field to have physical reality of its own. A field is a physical function projected on space. Usually one assumes the field to be continuous and differentiable almost everywhere. A field may be constant or variable. There are scalar fields (like the distribution of temperature in a gas), vector fields (like the electrostatic field), and tensor fields (like the electromagnetic field). A field of force is called ‘conservative’ if the forces are derivable from a space-dependent potential energy. This applies to the classical gravitational and electrostatic fields. It does not apply to the force derived by Hendrik Antoon Lorentz, because it depends on the velocity of a charged body with respect to a magnetic field. (The Lorentz force and Maxwell’s equations for the electromagnetic field are derivable from a gauge-invariant vector potential. ‘Gauge-invariance’ is the relativistic successor to the static concept of a conservative field.)
c. A further analysis of thermodynamics and electricity makes clear that current is a third projection, now from the physical onto the kinetic relation frame. The concept of entropy points to a general property of currents. In each current, entropy is created, making the current irreversible. (A current in a superconductor is a boundary case. In a closed superconducting circuit without a source, an electric current may persist indefinitely, whereas a normal current would die out very fast.)
In a system in which currents occur, entropy increases. Only if a system as a whole is in equilibrium, there are no net currents and the entropy is constant. Like several mechanical forces are able to balance each other, so do thermodynamic forces and currents. This leads to mutual relations like thermo-electricity, the phenomenon that a heat current causes an electric current (Thomas Seebeck’s effect) or reverse (Charles Peltier’s effect).[10] This is applied in the thermo-electric thermometer, measuring a temperature difference by an electric potential difference. Relations between various types of currents are subject to a symmetry relation discovered by William Kelvin and generalized by Lars Onsager.[11]
The laws of thermodynamics are generally valid, independent of the specific character of a physical thing or aggregate. For a limited set of specific systems (e.g., a gas consisting of similar molecules), statistical mechanics is able to derive the second law from mechanical interactions, starting from assumptions about their probability.[12] Whereas the thermodynamic law states that the entropy in a closed system is constant or increasing, the statistical law allows of fluctuations. The source of this difference is that thermodynamics supposes matter to be continuous, whereas statistical mechanics takes into account the molecular character of matter.
There are many different interactions, like electricity, magnetism, contact forces (e.g., friction), chemical forces (e.g., glue), or gravity. Some are reducible to others. The contact forces turn out to be of an electromagnetic nature, and chemical forces are reducible to electrical ones.
Besides the general unification discussed above allowing of the comparison of widely differing interactions, a characteristic unification can be discerned. James Clerk Maxwell’s unification of electricity and magnetism implies these interactions to have the same character, being subject to the same specific cluster of laws and showing symmetry. The fact that they can still be distinguished points to an asymmetry, a break of symmetry. The study of characteristic symmetries and symmetry breaks supplies an important tool for achieving a characteristic unification of natural forces.
Since the middle of the twentieth century, physics discerns four fundamental specific interactions. These are gravity and electromagnetic interaction besides the strong and weak nuclear forces. Later on, the electromagnetic and weak forces were united into the electroweak interaction, whereas the strong force is reducible to the colour force between quarks. In the near future, physicists expect to be able to unite the colour force with the electroweak interaction. The ultimate goal, the unification of all four forces is still far away.
About 1900, the ‘electromagnetic world view’ supposed that all physical and chemical interactions could be reduced to electromagnetism.[13] Just like the modern standard model, it aimed at deducing the (rest-) mass of elementary particles from this supposed fundamental interaction.[14]
These characteristic interactions are distinguished in several ways, first by the particles between which they act. Gravity acts between all particles, the colour force only between quarks, and the strong force only between particles composed from quarks. A process involving a neutrino is weak, but the reverse is not always true.
Another difference is their relative strength. Gravity is weakest and only plays a part because it cannot be neutralized. It manifests itself only on a macroscopic scale. The other forces are so effectively neutralized, that the electrical interaction was largely unknown until the eighteenth century, and the nuclear forces were not discovered before the twentieth century. Gravity conditions the existence of stars and systems of stars.
Next, gravity and electromagnetic interaction have an infinite range, whereas the other forces do not act beyond the limits of an atomic nucleus. For gravity and electricity the inverse-square law is valid (the force is inversely proportional to the square of the distance from a point-like source). This law is classically expressed in Isaac Newton’s law of gravity and Charles Coulomb’s electrostatic law, with mass respectively charge acting as a measure of the strength of the source. A comparable law does not apply to the other forces, and the lepton and baryon numbers do not act as a measure for their sources. As a function of distance, the weak interaction decreases much faster than quadratically. The colour force is nearly constant over a short distance (of the order of the size of a nucleus), beyond which it decreases abruptly to zero.
The various interactions also differ because of the field particles involved. Each fundamental interaction corresponds to a field in which quantized currents occur. For gravity, this is an unconfirmed hypothesis. Field particles have an integral spin and they are bosons (3.2, 4.4). If the spin is even (0 of 2), it concerns an attractive force between equal particles and a repulsive force between opposite particles (if applicable). For an uneven spin it is the other way around. The larger the field particle’s rest mass, the shorter is the range of the interaction. If the rest mass of the field particles is zero (as is the case with photons and gravitons), the range is infinite. Unless mentioned otherwise, the field particles are electrically neutral.
The mean lifetime of spontaneous decay differs widely. The stronger the interaction causing a transition, the faster the system changes. If a particle decays because of the colour force or strong force, it happens in a very short time (of the order of 10-23 to 10-19 sec). Particles decaying due to weak interaction have a relatively long lifetime (10-12 sec for a tauon up to 900 sec for a free neutron). Electromagnetic interaction is more or less between.
In high-energy physics, symmetry considerations and group theory play an important part in the analysis of collision processes. New properties like isospin and strangeness have led to the introduction of groups named SU(2) and SU(3) and the discovery of at first three, later six quarks. (SU(3) means special unitary group with three variables. The particles in a representation of this group have the same spin and parity (together one variable), but different values for strangeness and one component of isospin.)
Quantum electrodynamics reached its summit shortly after the Second World War, but the other interactions are less manageable, being developed only after 1970. Now each field has a symmetry property called gauge invariance, related to the laws of conservation of electric charge, baryon number, and lepton number. Symmetry is as much an empirical property as any other one. After the discovery of antiparticles it was assumed that charge conjugation C (symmetry with respect to the interchange of a particle with its antiparticle), parity P (mirror symmetry), and time reversal T are properties of all fundamental interactions. Since 1956, it is experimentally established that β-decay has no mirror symmetry unless combined with charge conjugation (CP). In 1964 it turned out that weak interactions are only symmetrical with respect to the product CPT, such that even T alone is no longer universally valid.
The appropriate theory is called the standard model, since the discovery of the J/y particle in 1974 explaining successfully a number of properties and interactions of subatomic particles. Dating from the seventies of the twentieth century, it was tentatively confirmed in 2012 by the experimental discovery of Peter Higgs’ particle, already predicted in 1964. Tentatively: the model does not include gravity, and some recently discovered properties of neutrinos do not quite fit into it. The general theory of relativity is still at variance with quantum electrodynamics, with the electroweak theory of Steven Weinberg and Abdus Salam, as well as with quantum chromodynamics.[15]
These fundamental interactions are specifications of the abstract concept of interaction being the universal physical and chemical relation. Their laws, like those of Maxwell for electromagnetism, form a specific set, which may be considered a character. But this character does not determine a class of things or events, but a class of relations.
Encyclopaedia of relations and characters. 5. Physical characters
5.2. The character of electrons
Ontology, the doctrine of on (or ontos, Greek for being), aims to answer the question of how matter is composed according to present-day insights. Since the beginning of the twentieth century, many kinds of particles received names ending with on, like electron, proton, neutron, and photon. At first sight, the relation with ontology seems to be obvious. Historically the suffix –on goes back to the electron. Whether the connection with ontology has really played a part is unclear.[16] The word electron comes from the Greek word for amber or fossilized resin, since antiquity known for its properties that we now recognize as static electricity. From 1874, Stoney used the word electron for the elementary amount of charge. Only in the twentieth century, electron became the name of the particle identified by Thomson in 1897. Rutherford introduced the names proton and neutron in 1920 (long before the actual discovery of the neutron in 1932). Lewis baptized the photon in 1926, 21 years after Einstein proposed its existence. Yet, not many physicists would affirm that an electron is the essence of electricity, that the proton forms the primeval matter, that the neutron and its little brother, the neutrino, have the nature of being neutral, or that in the photon light comes into being, and in the phonon sound. In pion, muon, tauon, and kaon, on is no more than a suffix of the letters π, μ, τ and K, whereas Paul Dirac baptized fermion and boson after Enrico Fermi and Satyendra Bose. In 1833 Michael Faraday, advised by William Whewell, introduced the words ion, kation, and anion, referring to the Greek word for to go. In an electrolyte, an ion moves from or to an electrode, an anode or cathode (names proposed by Whewell as well). An intruder is the positive electron. Meant as positon, the positron received an additional r, possibly under the influence of electron or new words like magnetron and cyclotron, which however are machines, not particles.
Only after 1925 quantum physics and high-energy physics allowed of the study of the characters of elementary physical things. Most characters have been discovered after 1930. But the discovery of the electron (1897), of the internal structure of an atom, composed from a nucleus and a number of electrons (1911), and of the photon (1905) preceded the quantum era. These are typical examples of characters founded in the quantitative, spatial, and kinetic projections of physical interaction. In section 5.1, these projections were pointed out to be respectively energy; force or field; and current.
An electron is characterized by a specific amount of mass and charge and is therefore quantitatively founded. The foundation is not in the quantitative relation frame itself (because that is not physical), but in the most important quantitative projection of the physical relation frame. This is energy, expressing the quantity of interaction. Like other particles, an electron has a typical rest energy, besides specific values for its electric charge, magnetic moment, and lepton number.
In chapter 4, I argued that an electron has the character of a wave packet as well, kinetically qualified and spatially founded, anticipating physical interactions. An electron has a specific physical character and a generic kinetic character. The two characters are interlaced within the at first sight simple electron. The combined dual character is called the wave-particle duality. Electrons share it with all other elementary particles. As a consequence of the kinetic character and the inherent Heisenberg relations, the position of an electron cannot be determined much better than within 10-10 m (about the size of a hydrogen atom). But the physical character implies that the electron’s collision diameter (being a measure of its physical size) is less than 10-17 m.
Except for quarks, all quantitatively founded particles are leptons, to be distinguished from field particles and baryons (5.3, 5.4). Leptons are not susceptible to the strong nuclear force or the colour force. They are subject to the weak force, sometimes to electromagnetic interaction, and like all matter to gravity. Each lepton has a positive or negative value for the lepton number (L), which significance appears in the occurrence or non-occurrence of certain processes. Each process is subject to the law of conservation of lepton number, i.e., the total lepton number cannot change. For instance, a neutron (L=0) does not decay into a proton and an electron, but into a proton (L=0), an electron (L=1) and an antineutrino (L=-1). The lepton number is just as characteristic for a particle as its electric charge. For non-leptons the lepton number is 0, for leptons it is +1 or -1.
Leptons satisfy a number of characteristic laws. Each particle has an electric charge being an integral multiple (positive, negative or zero) of the elementary charge. Each particle corresponds with an antiparticle having exactly the same rest mass and lifetime, but opposite values for charge and lepton number. Having a half-integral spin, leptons are fermions satisfying the exclusion principle and the characteristic Fermi-Dirac statistics (4.3, 5.5).
Three generations of leptons are known, each consisting of a negatively charged particle, a neutrino, and their antiparticles. These generations are related to similar generations of quarks (5.3). A tauon decays spontaneously into a muon, and a muon into an electron. Both are weak processes, in which simultaneously a neutrino and an anti-neutrino are emitted.
The leptons display little diversity, their number is exactly 6. Like their diversity, the variation of leptons is restricted. It only concerns their external relations: their position, their linear and angular momentum, and the orientation of their magnetic moment or spin relative to an external magnetic field.
This description emphasizes the quantitative aspect of leptons. But leptons are first of all physically qualified. Their specific character determines how they interact by electroweak interaction with each other and with other physical subjects, influencing their coming into being, change and decay.
Electrons are by far the most important leptons, having the disposition to become part of systems like atoms, molecules, and solids. The other leptons only play a part in high-energy processes. In order to stress the distinction between a definition and a character as a set of laws, I shall dwell a little longer on hundred years of development of our knowledge of the electron.[17]
Although more scientists were involved, it is generally accepted that Joseph J. Thomson in 1897 discovered the electron. He identified his cathode ray as a stream of particles and established roughly the ratio e/m of their charge e and mass m, by measuring how an electric and/or magnetic field deflects the cathode rays. In 1899 Thomson determined the value of e separately, allowing him to calculate the value of m. Since then, the values of m and e, which may be considered as defining the electron, are determined with increasing precision. In particular Robert Millikan did epoch-making work, between 1909 and 1916. Almost simultaneously with Thomson, Hendrik Lorentz observed that the Zeeman effect (1896) could be explained by the presence in atoms of charged particles having the same value for e/m as the electron. Shortly afterwards, the particles emerging from β-radioactivity and the photoelectric effect were identified as electrons.
The mass m depends on the electron’s speed, as was first established experimentally by Walter Kaufmann, later theoretically by Albert Einstein. Since then, instead of the mass m the rest mass mo is characteristic for a particle. Between 1911 and 1913, Ernest Rutherford and Niels Bohr developed the atomic model in which electrons move around a much more massive nucleus. The orbital angular momentum turned out to be quantized. In 1923 Louis de Broglie made clear that an electron sometimes behaves like a wave, interpreted as the bearer of probability by Max Born in 1926 (4.3). In 1925, Samuel Goudsmit and George Uhlenbeck suggested a new property, half-integral spin, connected to the electron’s intrinsic magnetic moment. In the same year, Wolfgang Pauli discovered the exclusion principle. Enrico Fermi and Paul Dirac derived the corresponding statistics in 1926. Since then, the electron is a fermion, playing a decisive part in all properties of matter (4.3, 5.3, 5.5). In 1930 it became clear that in β-radioactivity besides the electron a neutrino emerges from a nucleus. Neutrinos were later on recognized as members of the lepton family. β-radioactivity is not caused by electromagnetic interaction, but by the weak nuclear force. Electrons turned out not to be susceptible to strong nuclear forces. In 1931 the electron got a brother, the positron or anti-electron. This affirmed that an electron has no eternal life, but may be created or annihilated together with a positron. In β-radioactivity, too, an electron emerges or disappears (in a nucleus, an electron cannot exist as an independent particle), but apart from these processes, the electron is the most stable particle we know besides the proton. According to Paul Dirac, the positron is a hole in the nether world of an infinite number of electrons having a negative energy (4.3). In 1953, the law of conservation of lepton number was discovered. After the second world war, Richard Feynman, Julian Schwinger, and Shin’ichiro Tomonaga developed quantum electrodynamics. This is a field theory in which the physical vacuum is not empty, but is the stage of spontaneous creations and annihilations of virtual electron-positron pairs. Interaction with other (sometimes virtual) particles is partly responsible for the properties of each particle. As a top performance counts the theoretical calculation of the magnetic moment of the electron in eleven decimals, a precision only surpassed by the experimental measurement of the same quantity in twelve decimals. Moreover, the two values differ only in the eleventh decimal, within the theoretical margin of error. ‘The agreement between experiment and theory shown by these examples, the highest point in precision reached anywhere in the domain of particles and fields, ranks among the highest achievements of tentieth-century physics.’[18] Finally, the electron got two cousins, the muon and the tauon.
Besides these scientific developments, electronics revolutionized the world of communication, information, and control.
Since Joseph Thomson’s discovery, the concept of an electron has been changed and expanded considerably. Besides being a particle having mass and charge, it is now a wave, a top, a magnet, and a fermion, half of a twin, and a lepton. Yet, few people doubt that we are still talking about the same electron.
What the essence of an electron is appears to be a hard question, if ever posed. It may very well be a meaningless question. But we achieve a growing insight into the laws constituting the electron’s character, determining the electron’s relations with other things and the processes in which it is involved. The electron’s charge means that two electrons exert a force on each other according to the laws of Charles Coulomb and Hendrik Lorentz. The mass follows from the electron’s acceleration in an electric and/or magnetic field, according to James Clerk Maxwell’s laws. The lepton number makes only sense because of the law of conservation of lepton number, allowing of some processes and prohibiting others. Electrons are fermions, satisfying the exclusion principle and the distribution law of Enrico Fermi and Paul Dirac.
The character of electrons is not logically given by a definition, but physically by a specific set of laws, which are successively discovered and systematically connected by experimental and theoretical research.
An electron is to be considered an individual satisfying the character described above. A much-heard objection to the assignment of individuality to electrons and other elementary particles is the impossibility to distinguish one electron from another. Electrons are characteristically equal to each other, having much less variability than plants or animals, even less than atoms.
This objection can be retraced to the still influential worldview of mechanism. This worldview assumed each particle to be identifiable by objective kinetic properties like its position and velocity at a certain time. Quantum physics observes that the identification of physically qualified things requires a physical interaction. In general, this interaction influences the particle’s position and momentum (4.3). Therefore, the electron’s position and momentum cannot be determined with unlimited accuracy, as follows from Werner Heisenberg’s relations. This means that identification in a mechanistic sense is not always possible. Yet, in an interaction such as a measurement, an electron manifests itself as an individual.
If an electron is part of an atom, it can be identified by its state, because the exclusion principle precludes that two electrons would occupy the same state. The two electrons in the helium atom exchange their states continuously without changing the state of the atom as a whole. But it cannot be doubted that at any moment there are two electrons, each with its own mass, charge and magnetic moment. For instance, in the calculation of the energy levels the mutual repulsion of the two electrons plays an important part.
The individual existence of a bound electron depends on the binding energy being much smaller than its rest energy. Binding energy equals the energy needed to liberate an electron from an atom. It varies from a few eV (the outer electrons) to several tens of keV (the inner electrons in a heavy element like uranium). The electron’s rest mass is about 0.5 MeV, much larger than its binding energy in an atom (13.6 eV). To keep an electron as an independent particle in a nucleus would require a binding energy of more than 100 MeV, much more than the electron’s rest energy of 0,5 MeV. For this reason, physicists argue that electrons in a nucleus cannot exist as independent, individual particles, like they are in an atom’s shell.
In contrast, protons and neutrons in a nucleus satisfy the criterion that an independent particle has a rest energy substantially larger than the binding energy. Their binding energy is about 8 MeV, their rest energy is almost 1000 MeV. A nucleus is capable of emitting an electron (this is β-radioactivity). The electron’s existence starts at the emission and eventually ends at the absorption by a nucleus. Because of the law of conservation of lepton number, the emission of an electron is accompanied by the emission of an anti-neutrino, and at the absorption of an electron a neutrino is emitted. This would not be the case if the electron could exist as an independent particle in the nucleus. Neutrino’s are stable, their rest mass is zero or very small, and they are only susceptible to weak interaction. Neutrino’s and anti-neutrino’s differ by their parity, the one being left handed, and the other right handed. (This distinction is only possible for particles having zero restmass. If neutrinos have a rest mass different from zero, as some experiments suggest, the theory has to be adapted with respect to parity). That the three neutrinos differ from each other is established by processes in which they are or are not involved, but in what respect they differ is less clear. For some time, physicists expected the existence of a fourth generation, but the standard model restricts itself to three, because astrophysical cosmology implies the existence of at most three different types of neutrino’s with their antiparticles.
More than as free particles, the electrons display their characteristic properties as components of atoms, molecules and solids, as well as in processes. The half-integral spin of electrons was discovered in the investigation of atomic spectra. The electron’s fermion character largely determines the shell structure of atoms. In 1930, Wolfgang Pauli suggested the existence of the neutrino because of the character of β-radioactivity. The lepton number is discovered by an analysis of specific nuclear reactions.
Electrons have the affinity or propensity of functioning as a component of atoms and molecules because electrons share electromagnetic interaction with nuclei. Protons and electrons have the same but opposite charge, allowing of the formation of neutral atoms, molecules and solids. Electric neutrality is of tremendous importance for the stability of these systems. This tertiary characteristic determines the meaning of electrons in the cosmos.
Encyclopaedia of relations and characters. 5. Physical characters
5.3. The quantum ladder
An important spatial manifestation of interaction is the force between two spatially separated bodies. An atom or molecule having a spatially founded character consists of a number of nuclei and electrons kept together by the electromagnetic force. More generally, any interaction is spatially projected on a field.
Sometimes a field can be described as the spatial derivative of the potential energy. A set of particles constitutes a stable system if the potential energy has an appropriate shape, characteristic for the spatially founded structure. In a spatially founded structure, the relative spatial positions of the components are characteristic, even if their relative motions are taken care of. Atoms have a spherical symmetry restricting the motions of the electrons. In a molecule, the atoms or ions have characteristic relative positions, often with a specific symmetry. In each spatially founded character a number of quantitatively founded characters are interlaced.
It is a remarkable fact that in an atom the nucleus acts like a quantitatively founded character, whereas the nucleus itself is a spatial configuration of protons and neutrons kept together by forces. The nucleus itself has a spatially founded character, but in the atom it has the disposition to act as a whole, characterized by its mass, charge, and magnetic moment. Similarly, a molecule or a crystal is a system consisting of a number of atoms or ions and electrons, all acting like quantitatively founded particles. Externally, the nucleus in an atom and the atoms or ions in a molecule act as quantitatively founded wholes, as units, while preserving their own internal spatially founded structure.
However, an atom bound in a molecule is not completely the same as a free atom. In contrast to a nucleus, a free atom is electrically neutral and it has a spherical symmetry. Consequently, it cannot easily interact with other atoms or molecules, except in collisions. In order to become a part of a molecule, an atom has to open up its tertiary character. This can be done in various ways. The atom may absorb or eject an electron, becoming an ion. A common salt molecule does not consist of a neutral sodium atom and a neutral chlorine atom, but of a positive sodium ion and a negative chlorine ion, attracting each other by the Coulomb force. This is called heteropolar or ionic bonding. Any change of the spherical symmetry of the atom’s electron cloud leads to the relatively weak Van der Waals interaction. A very strong bond results if two atoms share an electron pair. This homopolar or covalent bond occurs in diatomic molecules like hydrogen, oxygen, and nitrogen, in diamond, and in many carbon compounds. Finally, especially in organic chemistry, the hydrogen bond is important. It means the sharing of a proton by two atom groups.
The possibility of being bound into a larger configuration is a very significant tertiary characteristic of many physically qualified systems, determining their meaning in the cosmos.
The first stable system studied by physics is the solar system, in the seventeenth century investigated by Johannes Kepler, Galileo Galilei, Christiaan Huygens, and Isaac Newton. The law of gravity, mechanical laws of motion, and conservation laws determine the character of planetary motion. The solar system is not unique, there are more stars with planets, and the same character applies to a planet with its moons, or to a double star. Any model of the system presupposes its isolation from the rest of the world, which is only approximately the case. This approximation is pretty good for the solar system, less good for the system of the sun and each planet apart, and pretty bad for the system of earth and moon.
Spatially founded physical characters display a large disparity. Various specific subtypes appear. According to the standard model (5.1), these characters form a hierarchy, called the quantum ladder.[19] At the first rung there are six (or eighteen, see below) different quarks, with the antiquarks grouped into three generations related to those of leptons, as follows from analogous processes.
Like a lepton, a quark is quantitatively founded, it has no structure. But a quark cannot exist as a free particle. Quarks are confined as a duo in a meson (e.g., a pion) or as a trio in a baryon (e.g., a proton or a neutron) or an antibaryon. Confinement is a tertiary characteristic, but it does not stand apart from the secondary characteristics of quarks, their quantitative properties. Whereas quarks have a charge of 1/3 or 2/3 times the elementary charge, their combinations satisfy the law that the electric charge of a free particle can only be an integral multiple of the elementary charge. Likewise, in confinement the sum of the baryon numbers (for quarks ±1/3 of ±2/3) always yields an integral number. For a meson this number is 0, for a baryon it is +1, for an antibaryon it is -1.
Between quarks the colour force is acting, mediated by gluons. The colour force has no effect on leptons and is related to the strong force between baryons. In a meson the colour force between two quarks hardly depends on their mutual distance, meaning that they cannot be torn apart. If a meson breaks apart, the result is not two separate quarks but two quark-antiquark pairs.
Quarks are fermions, they satisfy the exclusion principle. In a meson or baryon, two identical quarks cannot occupy the same state. But an omega particle (sss) consists of three strange quarks having the same spin. This is possible because each quark exists in three variants, each indicated by a ‘colour’ besides six ‘flavours’. For the antiquarks three complementary colours are available. The metaphor of ‘colour’ is chosen because the colours are able to neutralize each other, like ordinary colours can be combined to produce white. This can be done in two ways, in a duo by adding a colour to its anticolour, or in a trio by adding three different colours or anticolours. The law that mesons and baryons must be coulorless yields an additional restriction on the number of possible combinations of quarks. A white particle is neutral with respect to the colour force, like an uncharged particle is neutral with respect to the Coulomb force. Nevertheless, an electrically neutral particle may exert electromagnetic interaction because of its magnetic moment. This applies e.g. to a neutron, but not to a neutrino. Similarly, by the exchange of mesons, the colour force manifests itself as the strong nuclear force acting between baryons, even if baryons are ‘white’. Two quarks interact by exchanging gluons, thereby changing of colour.
The twentieth-century standard model has no solution to a number of problems. Why only three generations? If all matter above the level of hadrons consists of particles from the first generation, what is the tertiary disposition of the particles of the second and third generation? Should the particles of the second and third generation be considered excited states of those of the first generation? Why does each generation consist of two quarks and two leptons (with corresponding antiparticles)? What is the origin of the mass differences between various leptons and quarks?
The last question might be the only one to receive an answer in the twenty-first century, when the existence of Peter Higgs’ particle and its mass were experimentally established (2012). For the other problems, at the end of the twentieth century no experiment is proposed providing sufficient information to suggest a solution.
The second level of the hierarchy consists of hadrons, baryons having half integral spin and mesons having integral spin. Although the combination of quarks is subject to severe restrictions, there are quite a few different hadrons. A proton consists of two up and one down quark (uud), and a neutron is composed of one up and two down quarks (udd). These two nucleons are the lightest baryons, all others being called hyperons. A pion consists of dd, uu (charge 0), du (–e) or ud (+e). As a free particle, only the proton is stable, whereas the neutron is stable within a nucleus. A free neutron decays into a proton, an electron and an antineutrino. The law of conservation of baryon number is responsible for the stability of the proton, being the baryon with the lowest rest energy. All other hadrons have a very short mean lifetime, a free neutron having the longest (900 sec). Their diversity is much larger than that of leptons and of quarks. Based on symmetry relations, group theory orders the hadrons into sets of e.g. eight baryons or ten mesons.
For a large part, the interaction of hadrons consists of rearranging quarks accompanied by the creation and annihilation of quark-antiquark pairs and lepton-antilepton pairs. The general laws of conservation of energy, linear and angular momentum, the specific laws of conservation of electric charge, lepton number and baryon number, and the laws restricting electric charge and baryon number to integral values, characterize the possible processes between hadrons in a quantitative sense. Besides, the fields described by quantum electrodynamics and quantum chromodynamics characterize these processes in a spatial sense, and the exchange of field particles in a kinetic way.
Atomic nuclei constitute the third layer in the hierarchy. With the exception of hydrogen, each nucleus consists of protons and neutrons, determining together the coherence, binding energy, stability, and lifetime of the nucleus. The mass of the nucleus is the sum of the masses of the nucleons less the mass equivalent to the binding energy. Decisive is the balance of the repulsive electric force between the protons and the attractive strong nuclear force binding the nucleons independent of their electric charge. In heavy nuclei, the surplus of neutrons compensates for the mutual repulsion of the protons. To a large extent, the exclusion principle applied to neutrons and protons separately determines the stability of the nucleus and its internal energy states.
The nuclear force is negligible for the external functioning of a nucleus in an atom or molecule. Only the mass of the nucleus, its electric charge and its magnetic moment are relevant for its external relations. Omitting the magnetic moment leads to two diversities in nuclei.
The first diversity concerns the number of protons. In a neutral atom it equals the number of electrons determining the atom’s chemical propensities. The nuclear charge together with the exclusion principle dominates the energy states of the electrons, hence the position of the atom in the periodic system of elements.
The second diversity concerns the number of neutrons in the nucleus. Atoms having the same number of protons but differing in neutron number are called isotopes, because they have the same position (topos) in the periodic system. They have similar chemical propensities.
The diversity of atomic nuclei is represented in a two-dimensional diagram, a configuration space. The horizontal axis represents the number of protons (Z = atomic number), the vertical axis the number of neutrons (N). In this diagram the isotopes (same Z, different N) are positioned above each other. The configuration space is mostly empty, because only a restricted number of combinations of Z and N lead to stable or metastable (radioactive) nuclei. The periodic system of elements is a two-dimensional diagram as well. Dmitri Mendelejev ordered the elements in a sequence according to a secondary property (the atomic mass) and below each other according to tertiary propensities (the affinity of atoms to form molecules, in particular compounds with hydrogen and oxygen). Later on, the atomic mass was replaced by the atomic number Z. However, quantum physics made clear that the atomic chemical properties are not due to the nuclei, but to the electrons subject to the exclusion principle. The vertical ordering in the periodic system concerns the configuration of the electronic shells. In particular the electrons in the outer shells determine the tertiary chemical propensities.
This is not an ordering according to a definition in terms of necessary and sufficient properties distinguishing one element from the other, but according to their characters. The properties do not define a character, as essentialism assumes, but the character (a set of laws) determines the properties and propensities of the atoms.
In the hierarchical order, we find globally an increase of spatial dimensions, diversity of characters and variation within a character, besides a decrease of the binding energy per particle and the significance of strong and weak nuclear forces. For the characters of atoms, molecules, and crystals, only the electromagnetic interaction is relevant.
The internal variation of a spatially founded character is very large. Quantum physics describes the internal states with the help of David Hilbert’s space, having the eigenvectors of William Hamilton’s operator as a base (2.3). A Hilbert space describes the ensemble of possibilities (in particular the energy eigenvalues) determined by the system’s character. In turn, the atom or molecule’s character itself is represented by Edwin Schrödinger’s time-independent equation. This equation is exactly solvable only in the case of two interacting particles, like the hydrogen atom, the helium ion, the lithium ion, and positronium. (Positronium is a short living composite of an electron and a positron, the only spatially founded structure entirely consisting of leptons.) In other cases, the equation serves as a starting point for approximate solutions, usually only manageable with the help of a large computer.
The hierarchical connection implies that the spatially founded characters are successively interlaced, for example nucleons in a nucleus, or the nucleus in an atom, or atoms in a molecule. Besides, these characters are interlaced with kinetically, spatially, and quantitatively qualified characters, and often with biotically qualified characters as well.
The characters described depend strongly on a number of natural constants, which value can be established only experimentally, not theoretically. Among others, this concerns the gravitational constant G, the speed of light c, Planck’s constant h and the elementary electric charge e, or combinations like the fine structure constant (2pe2/hc=1/137.036) and the mass ratio of the proton and the electron (1836.104). If the constants of nature would be slightly different, both nuclear properties and chemical properties would change drastically.[20]
The quantum ladder is of a physical and chemical nature. As an ordering principle, the ladder has a few flaws from a logical point of view. For instance, the proton occurs on three different levels, as a baryon, as a nucleus, and as an ion. The atoms of the noble gases are their molecules as well. This is irrelevant for their character. The character of a proton consists of the specific laws to which it is subjected. The classification of baryons, nuclei, or ions is not a characterization, and a proton is not ‘essentially’ a baryon and ‘accidentally’ a nucleus or an ion.
The number of molecular characters is enormous and no universal classification of molecules exists. In particular the characters in which carbon is an important element show a large diversity.
The molecular formula indicates the number of atoms of each element in a molecule. Besides, the characteristic spatial structure of a molecule determines its chemical properties. The composition of a methane molecule is given by the formula CH4, but it is no less significant that the methane molecule has the symmetrical shape of a regular tetrahedron, with the carbon atom at the centre and the four hydrogen atoms at the vertices. The V-like shape of a water molecule (the three atoms do not lie on a straight line, but form a characteristic angle of 105o) causes the molecule to have a permanent electric dipole moment, explaining many of the exceptional properties of water. Isomers are materials having the same molecular formula but different spatial orderings, hence different chemical properties. Like the symmetry between a left and a right glove, the spatial symmetry property of mirroring leads to the distinction of dextro- and laevo-molecules.
The symmetry characteristic for the generic (physical) character is an emergent property, in general irreducible to the characters of the composing systems. Conversely, the original symmetry of the composing systems is broken. In methane, the outer shells of the carbon atom have exchanged their spherical symmetry for the tetrahedron symmetry of the molecule. Symmetry breaks also occur in fields. The symmetry of strong nuclear interaction is broken by electroweak interaction. For the strong interaction, the proton and the neutron are symmetrical particles having the same rest energy, but the electroweak interaction causes the neutron to have a slightly larger rest energy and to be metastable as a free particle.
From quantum field theory, in principle it should be possible to derive successively the emergent properties of particles and their spatially founded composites. This is the synthetic, reductionist or fundamentalist trend, constructing complicated structures from simpler ones. It cannot explain symmetry breaks.[21] For practical reasons too, a synthetic approach is usually impossible. The alternative is the analytical or holistic method, in which the symmetry break is explained from the empirically established symmetry of the original character. Symmetries and other structural properties are usually a posteriori explained, and hardly ever a priori derived. However, analysis and synthesis are not contrary but complementary methods.
Climbing the quantum ladder, complexity seems to increase. On second thoughts, complexity is not a clear concept. An atom would be more complex than a nucleus and a molecule even more. However, in the character of a hydrogen atom or a hydrogen molecule, weak and strong interactions are negligible, and the complex spatially founded nuclear structure is reduced to the far simpler quantitatively founded character of a particle having mass, charge, and magnetic moment. Moreover, a uranium nucleus consisting of 92 protons and 146 neutrons has a much more complicated character than a hydrogen molecule consisting of two protons and two electrons, having a position two levels higher on the quantum ladder.
Inward a system is more complex than outward. An atom consists of a nucleus and a number of electrons, grouped into shells. If a shell is completely filled in conformity with the exclusion principle, it is chemically inert, serving mostly to reduce the effective nuclear charge. A small number of electrons in partially occupied shells determines the atom’s chemical propensities. Consequently, an atom of a noble gas, having only completely occupied shells, is less complicated than an atom having one or two electrons less. The complexity of molecules increases if the number of atoms increases. But some very large organic molecules consist of a repetition of similar atomic groups and are not particularly complex.
In fact, there does not exist an unequivocal criterion for complexity.
An important property of hierarchically ordered characters is that for the explanation of a character it is sufficient to descend to the next lower level. For the understanding of molecules, a chemist needs the atomic theory, but he does not need to know much about nuclear physics. A molecular biologist is acquainted with the chemical molecular theory, but his knowledge of atomic theory may be rather superficial. This is possible because of the phenomenon that a physical character interlaced in another one both keeps its properties and hides them.
Each system derives its stability from an internal equilibrium that is hardly observable from without. The nuclear forces do not range outside the nucleus. Strong electric forces bind an atom or a molecule, but as a whole it is electrically neutral. The strong internal equilibrium and the weak remaining external action are together characteristic for a stable physical system. If a system exerts a force on another one, it experiences an equal external force. This external force should be much smaller than the internal forces keeping the system intact, otherwise it will be torn apart. In a collision between two molecules, the external interaction may be strong enough to disturb the internal equilibrium, such that the molecules fall apart. Eventually, a new molecule with a different character emerges. Because the mean collision energy is proportional to the temperature, the stability of molecules and crystals depend on this parameter. In the sun’s atmosphere no molecules exist and in its centre no atoms occur. In a very hot star like a neutron star, even nuclei cannot exist.
Hence, a stable physical or chemical system is relatively inactive. It looks like an isolated system. This is radically different from plants and animals that can never be isolated from their environment. The internal physical equilibrium of a plant or an animal is maintained by metabolism, the continuous flow of energy and matter through the organism.
Encyclopaedia of relations and characters. 5. Physical characters
5.4. Individualized currents
I consider the primarily physical character of a photon to be secondarily kinetically founded. A photon is a field particle in the electromagnetic interaction, transporting energy, as well as linear and angular momentum from one spatially founded system to another. Besides photons, nuclear physics recognizes gluons being field particles for the colour force, mesons for the strong nuclear force, and three types of vector bosons for the weak interaction. The existence of the graviton, the field particle for gravity, has not been experimentally confirmed. All these interaction particles have an integral spin and are bosons. Hence, these are not subject to the exclusion principle. Field particles are not quantitatively or spatially founded things, but individualized characteristic currents, hence kinetically founded ‘quasiparticles’. Bosons carry forces, whereas fermions feel or experience forces.
By absorbing a photon, an atom comes into an excited state, i.e. a metastable state at a higher energy than the ground state. Whereas an atom in its ground state can be considered an isolated system, an excited atom is always surrounded by the electromagnetic field.
A photon is a wave packet, like an electron it has a dual character. Yet there is a difference. Whereas the electron’s motion has a wave character, a photon is a current in an electromagnetic field, a current being a kinetic projection of physical interaction. With respect to electrons, the wave motion only determines the probability of what will happen in a future interaction. In a photon, besides determining a similar probability, the wave consists of periodically changing electric and magnetic fields. A real particle’s wave motion lacks a substratum, there is no characteristic medium in which it moves, and its velocity is variable. Moving quasiparticles have a substratum, and their wave velocity is a property of the medium. The medium for light in empty space is the electromagnetic field, all photons having the same speed independent of any reference system.
Each inorganic solid consists of crystals, sometimes microscopically small. Amorphous solid matter does not exist or is very rare. The ground state of a crystal is the hypothetical state at zero temperature. At higher temperatures, each solid is in an excited state, determined by the presence of quasiparticles.
The crystal symmetry, adequately described by the theory of groups, has two or three levels. First, each crystal is composed of space filling unit cells. All unit cells of a crystal are equal to each other, containing the same number of atoms, ions or molecules in the same configuration. A characteristic lattice point indicates the position of a unit cell. The lattice points constitute a Bravais lattice (called after Auguste Bravais), representing the crystal’s translation symmetry. Only fourteen types of Bravais lattices are mathematically possible and realized in nature. Each lattice allows of some variation, for instance with respect to the mutual distance of the lattice points, as is seen when the crystal expands on heating. Because each crystal is finite, the translation symmetry is restricted and the surface structure of a crystal may be quite different from the crystal structure.
Second, the unit cell has a symmetry of its own, superposed on the translation symmetry of the Bravais lattice. The cell may be symmetrical with respect to reflection, rotation, or inversion. The combined symmetry determines how the crystal scatters X-rays or neutrons, presenting a means to investigate the crystalline structure empirically. Hence, the long distance spatial order of a crystal evokes a long time kinetic order of specific waves.
Third, in some materials we find an additional ordering, for instance that of the magnetic moments of electrons or atoms in a ferromagnet. Like the first one, this is a long-distance ordering. It involves an interaction that is not restricted to nearest neighbours. It may extend over many millions of atomic distances.
The atoms in a crystal oscillate around their equilibrium positions. These elastic oscillations are transferred from one atom to the next like a sound wave, and because the crystal has a finite volume, this is a stationary wave, a collective oscillation. The crystal as a whole is in an elastic oscillation, having a kinetically founded character. These waves have a broad spectrum of frequencies and wavelengths, being sampled into wave packets. In analogy with light, these field particles are called sound quanta or phonons.
Like the electrons in a metal, the phonons act like particles in a box (4.4). Otherwise they differ widely. The number of electrons is constant, but the number of phonons increases strongly at increasing temperature. Like all quasiparticles, the phonons are bosons, not being subject to the exclusion principle. The mean kinetic energy of the electrons hardly depends on temperature, and their specific heat is only measurable at a low temperature. In contrast, the mean kinetic energy of phonons strongly depends on temperature, and the phonon gas dominates the specific heat of solids. At a low temperature this increases proportional to T3 to become constant at a higher temperature. Peter Debije’s theory (originally 1912, later adapted) explains this from the wave and boson character of phonons and the periodic character of the crystalline structure.
In a solid or liquid, besides phonons many other quantized excitations occur, corresponding, for instance, with magnetization waves or spin waves. The interactions of quasiparticles and electrons cause the photoelectric effect and transport phenomena like electric resistance and thermo-electricity.
The specific properties of some superconductors can be described with the help of quasiparticles. (This applies to the superconducting metals and alloys known before 1986. For the ceramic superconductors, discovered since 1986, this explanation is not sufficient.) In a superconductor two electrons constitute a pair called after Leon Cooper. This is a pair of electrons in a bound state, such that both the total linear momentum and the total angular momentum are zero. The two electrons are not necessarily close to each other. Superconductivity is a phenomenon with many variants, and the theory is far from complete.
Superconductivity is a collective phenomenon in which the wave functions of several particles are macroscopically coherent. There is no internal dissipation of energy. It appears that on a macroscopic scale the existence of kinetically founded characters is only possible if there is no decoherence (4.3). Therefore, kinetically founded physical characters on a macroscopic scale are quite exceptional.
Encyclopaedia of relations and characters. 5. Physical characters
5.5. Aggregates and statistics
We have now discussed three types of physically qualified characters, but this does not exhaust the theory of matter. The inorganic sciences acknowledge many kinds of mixtures, aggregates, alloys or solutions. In nature, these are more abundant than pure matter. Often, the possibility to form a mixture is restricted and some substances do not mix at all. In order to form a stable aggregate, the components must be tuned to each other. Typical for an aggregate is that the characteristic magnitudes (like pressure, volume, and temperature for a gas) are variable within a considerable margin, even if there is a lawful connection between these magnitudes.
Continuous variability provides quantum physics with a criterion to distinguish a composite thing (with a character of its own) from an aggregate. Consider the interaction between an electron and a proton. In the most extreme case this leads to the absorption of the electron and the transformation of the proton into a neutron (releasing a neutrino). At a lower energy, the interaction may lead to a bound state having the character of a hydrogen atom if the total energy (kinetic and potential) is negative. Finally, if the total energy is positive, we have an unbound state, an aggregate. In the bound state the energy can only have discrete values, it is quantized, whereas in the unbound state the energy is continuously variable.
Hence, if the rest energy has a characteristic value and internal energy states are lacking, we have an elementary particle (a lepton or a quark). If there are internal discrete energy states we have a composite character, whereas we have an aggregate if the internal energy is continuously variable.
With aggregates it is easier to abstract from specific properties than in the case of the characters of composite systems discussed in section 5.3. Studying the properties of macroscopic physical bodies, thermodynamics starts from four general laws, for historical reasons numbered 0 to 3 and written with capitals.
The Zeroth Law states that two or more bodies (or parts of a single body) can be in mutual equilibrium. Now the temperature of the interacting bodies is the same, and in a body as a whole the temperature is uniform. Depending on the nature of the interaction, this applies to other intensive magnitudes as well, for instance the pressure of a gas, or the electric or chemical potential. In this context bodies are not necessarily spatially separated. The thermodynamic laws apply to the components of a mixture as well. Equilibrium is an equivalence relation (2.1). An intensive magnitude like temperature is an equilibrium parameter, to be distinguished from an extensive magnitude like energy, which is additive. If two unequal bodies are in thermal equilibrium with each other, their temperature is the same, but their energy is different and the total energy is the sum of the energies of the two bodies apart. An additive magnitude refers to the quantitative relation frame, whereas an equilibrium parameter is a projection on the spatial frame.
According to the First Law of thermodynamics, the total energy is constant, if the interacting bodies are isolated from the rest of the world. The thermodynamic law of conservation of energy forbids all processes in which energy would be created or annihilated. The First Law does not follow from the fact that energy is additional. Volume, entropy, and the mass of each chemical component are additive as well, but not always constant in an interaction.
The Second Law states that interacting systems proceed towards an equilibrium state. The entropy decreases if a body loses energy and increases if a body gains energy, but always in such a way that the total entropy increases as long as equilibrium is not reached. Based on this law only entropy differences can be calculated.
According to the Third Law the absolute zero of temperature cannot be reached. At this temperature all systems would have the same entropy, to be considered the zero point on the entropy scale.
From these axioms other laws are derivable, such as Joshua Gibbs’s phase rule (see below). As long as the interacting systems are not in equilibrium, the gradient of each equilibrium parameter acts as the driving force for the corresponding current causing equilibrium. A temperature gradient drives a heat current, a potential difference drives an electric current, and a chemical potential difference drives a material current. Any current (except a superconducting flow) creates entropy.
The thermodynamic axioms describe the natural laws correctly in the case of interacting systems being close to equilibrium. Otherwise, the currents are turbulent and a concept like entropy cannot be defined. Another restriction follows from the individuality of the particles composing the system. In the equilibrium state, the entropy is not exactly constant, but it fluctuates spontaneously around the equilibrium value. Quantum physics shows energy to be subject to Werner Heisenberg’s relations (4.3). In fact, the classical thermodynamic axioms refer to a continuum, not to the actually coarse matter. Thermodynamics is a general theory of matter, whereas statistical physics studies matter starting from the specific properties of the particles composing a system. This means that thermodynamics and statistical physics complement each other.
An equilibrium state is sometimes called an ‘attractor’, attracting a system from any instable state toward a stable state. Occasionally, a system has several attractors, now called local equilibrium states. If there is a strong energy barrier between the local equilibrium states, it is accidental which state is realized. By an external influence, a sudden and apparently drastic transition may occur from one attractor to another one. In quantum physics a similar phenomenon is called ‘tunneling’, to which I shall return in section 5.6.
a. A homogeneous set of particles having the same character may be considered a quantitatively founded aggregate, if the set does not constitute a structural whole with a spatially founded character of its own (like the electrons in an atom). In a gas the particles are not bound to each other. Usually, an external force or a container is needed to keep the particles together. In a fluid, the surface tension is a connective force that does not give rise to a characteristic whole. The composing particles’ structural similarity is a condition for the applicability of statistics. Therefore I call a homogeneous aggregate quantitatively founded.
It is not sufficient to know that the particles are structurally similar. At least it should be specified whether the particles are fermions or bosons (4.4). Consider, for instance, liquid helium, having two varieties. In the most common isotope, a helium nucleus is composed of two protons and two neutrons. The net spin is zero, hence the nucleus is a boson. In a less common isotope, the helium nucleus has only one neutron besides two protons. Now the nucleus’ net spin is ½ and it is a fermion. This distinction (having no chemical consequences) accounts for the strongly diverging physical properties of the two fluids.
Each homogeneous gas is subjected to a specific law, called the statistics or distribution function. It determines how the particles are distributed over the available states, taking into account parameters like volume, temperature, and total energy. The distribution function does not specify which states are available. Before the statistics is applicable, the energy of each state must be calculated separately.
The Fermi-Dirac statistics based on Wolfgang Pauli’s exclusion principle applies to all homogeneous aggregates of fermions, i.e., particles having half-integral spin. For field particles and other particles having an integral spin, the Bose-Einstein statistics applies, without an exclusion principle. If the mean occupation number of available energy states is low, both statistics may be approximated by the classical Maxwell-Boltzmann distribution function. Except at very low temperatures, this applies to every dilute gas consisting of similar atoms or molecules. The law of Robert Boyle and Louis Gay-Lussac follows from this statistics. It determines the relation between volume, pressure and temperature for a dilute gas, if the interaction between the molecules is restricted to elastic collisions and if the molecular dimensions are negligible. Without these two restrictions, the state equation of Johannes Van der Waals counts as a good approximation. Contrary to the law of Boyle and Gay-Lussac, Van der Waals’ equation contains two constants characteristic for the gas concerned. It describes the condensation of a gas to a fluid as well as the phenomena occurring at the critical point, the highest temperature at which the substance is liquid.
b. It is not possible to apply statistics directly to a mixture of subjects having different characters. Sometimes, it can be done with respect to the components of a mixture apart. For a mixture of gases like air, the pressure exerted by the mixture equals the sum of the partial pressures exerted by each component apart in the same volume at the same temperature (John Dalton’s law). The chemical potential is a parameter distinguishing the components of a heterogeneous mixture.
I consider a heterogeneous mixture like a solution to have a spatial foundation, because the solvent is the physical environment of the dissolved substance. Solubility is a characteristic disposition of a substance dependent on the character of the solvent as the potential environment.
Stable characters in one environment may be unstable in another one. When solved in water, common salt molecules fall apart into sodium and chlorine ions. In the environment of water, the dielectric constant is much higher than in air. Now Charles Coulomb’s force between the ions is proportionally smaller, too small to keep the ions together. (A more detailed explanation depends on the property of a water molecule to have a permanent electric dipole moment (5.3). Each sodium or chlorine ion is surrounded by a number of water molecules, decreasing their net electric charge. This causes the binding energy to be less than the mean kinetic energy of the molecules.)
The composition of a mixture, the number of grams of solved substance in one litre water, is accidental. It is not determined by any character but by its history. This does not mean that two substances can be mixed in any proportion whatsoever. However, within certain limits dependent on the temperature and the characters of the substances concerned, the proportion is almost continuously variable.
c. Even if a system only consists of particles of the same character, it may not appear homogeneous. It exists in two or more different ‘phases’ simultaneously, for example, the solid, liquid, and vaporous states. A glass of water with melting ice is in internal equilibrium at 0 °C. If heat is supplied, the temperature remains the same until all ice is melted. Only chemically pure substances have a characteristic melting point. In contrast, a heterogeneous mixture has a melting trajectory, meaning that during the melting process, the temperature increases. A similar characteristic transition temperature applies to other phase transitions in a homogeneous substance, like vaporizing, the transition from a paramagnetic to a ferromagnetic state, or the transition from a normal to a superconducting state. Addition of heat or change of external pressure shifts the equilibrium. A condition for equilibrium is that the particles concerned move continuously from one phase to the other. Therefore I call it a homogeneous kinetically founded aggregate.
An important example of a heterogeneous kinetic equilibrium concerns chemical reactions. Water consists mostly of water molecules, but a small part (10-7 at 25oC) is dissociated into positive H-ions and negative OH-ions. In the equilibrium state, equal amounts of molecules are dissociated and associated. By adding other substances (acids or bases), the equilibrium is shifted.
Both phase transitions and chemical reactions are subject to characteristic laws and to general thermodynamic laws, for instance Joshua Gibbs’s phase rule.[22]
Encyclopaedia of relations and characters. 5. Physical characters
5.6. Coming into being, change and decay
I call an event physically qualified if it is primarily characterized by an interaction between two or more subjects. A process is a characteristic set of events, partly simultaneously, partly successively. Therefore, physically qualified events and processes often occur in an aggregate, sometimes under strictly determined circumstances, among which the temperature. In a mixture, physical, chemical, and astrophysical reactions lead to the realization of characters. Whereas in physical things properties like stability and life time are most relevant, physical and chemical processes concern the coming into being, change, and decay of those things.
In each characteristic event a thing changes of character (it emerges or decays) or of state (preserving its identity). With respect to the thing’s character considered as a law, the first case concerns a subjective event (because the subject changes). The second case concerns an objective event (for the objective state changes). Both have secondary characteristics. I shall briefly mention some examples.
Annihilation or creation of particles is a subjective numerically founded event. Like any other event, it is subject to conservation laws. An electron and a positron emerge simultaneously from the collision of a γ-particle with some other particle, if the photon’s energy is at least twice the electron’s rest energy. The presence of another particle, like an atomic nucleus, is required in order to satisfy the law of conservation of linear momentum. For the same reason, at least two photons emerge when an electron and a positron destroy each other.
By emitting or absorbing a photon, a nucleus, atom or molecule changes its state. This is a spatially founded objective transformation. In contrast, in a nuclear or chemical reaction one or more characters are transformed, constituting a subjective spatially founded event. In alpha- or beta-radioactivity, a nucleus changes subjectively its character, in gamma-activity it only changes objectively its state.
An elastic collision is an event in which the kinetic state of a particle is changed without consequences for its character or its internal state. Hence, this concerns an objective kinetically founded event. In a non-elastic collision a subjective change of character or an objective change of state occurs. Quantum physics describes such events with the help of operators determining the transition probability.
A process is an aggregate of events. In a homogeneous aggregate, phase transitions may occur. In a heterogeneous aggregate chemical reactions occur (5.5). Both are kinetically founded. This also applies to transport phenomena like electric, thermal or material currents, thermo-electric phenomena, osmosis and diffusion.
Conservation laws are ‘constraints’ restricting the possibility of processes. For instance, a process in which the total electric charge would change is impossible. In atomic and nuclear physics, transitions are known to be forbidden or improbable because of selection rules for quantum numbers characterizing the states concerned.
Physicists and chemists take for granted that each process that is not forbidden is possible and therefore experimentally realizable. In fact, several laws of conservation like those of lepton number and baryon number were discovered because certain reactions turned out to be impossible. Conversely, in 1930 Wolfgang Pauli postulated the existence of neutrinos, because otherwise the laws of conservation of energy and momentum would not apply to beta-radioactivity. Experimentally, the existence of neutrinos was not confirmed until 1956.
In common parlance, a collision is a rather dramatic event, but in physics and chemistry a collision is just an interaction between two or more subjects moving towards each other, starting from a large distance, where their interaction is negligible. In classical mechanics, this interaction means an attractive or repelling force. In modern physics, it implies the exchange of real or virtual particles like photons.
In each collision, at least the state of motion of the interacting particles changes. If that is all, we speak of an elastic collision, in which only the distribution of kinetic energy, linear and angular momentum over the colliding particles changes. A photon can collide elastically with an electron (Arthur Compton’s effect), but an electron cannot absorb a photon. Only a composite thing like a nucleus or an atom is able to absorb a particle.
Collisions are used to investigate the character of the particles concerned. A famous example is the scattering of alpha-particles (helium nuclei) by gold atoms (1911). For the physical process, it is sufficient to assume that the particles have mass and charge and are point-like. It does not matter whether the particles are positively or negatively charged. The character of this collision is statistically expressed in a mathematical formula derived by Ernest Rutherford. The fact that the experimental results (by Hans Geiger and Ernest Marsden) agreed with the formula indicated that the nucleus is much smaller than the atom, and that the mass of the atom is almost completely concentrated in the nucleus. A slight deviation between the experimental results and the theoretical formula allowed of an estimate of the size of the nucleus, its diameter being about 104 times smaller than the atom’s. The dimension of a microscopic invisible particle is calculable from similar collision processes, and is therefore called its collision diameter. Its value depends on the projectiles used. The collision diameter of a proton differs if determined from collisions with electrons or neutrons.
In a non-elastic collision the internal structure of one or more colliding subjects changes in some respect. With billiard balls only the temperature increases, kinetic energy being transformed into heat, causing the motion to decelerate.
In a non-elastic collision between atoms or molecules, the state of at least one of them changes into an excited state, sooner or later followed by the emission of a photon. This is an objective characteristic process.
The character of the colliding subjects may change subjectively as well, for instance, if an atom loses an electron and becomes an ion, or if a molecule is dissociated or associated.
Collisions as a means to investigate the characters of subatomic particles have become a sophisticated art in high-energy physics.
Spontaneous decay became first known at the end of the nineteenth century from radioactive processes. It involves strong, weak or electromagnetic interactions, respectively in α-, β-, and γ-radiation. The decay law of Ernest Rutherford and Frederick Soddy (1902) approximately represents the character of a single radioactive process. This statistical law is only explainable by assuming that each atom decays independently of all other atoms. It is a random process. Besides, radioactivity is almost independent of circumstances like temperature, pressure and the chemical compound in which the radioactive atom is bound. Such decay processes occur in nuclei and sub-atomic particles, as well as in atoms and molecules being in a metastable state. The decay time is the mean duration of existence of the system or the state.
Besides spontaneous ones, stimulated transformations occur. Albert Einstein first investigated this phenomenon in 1916, with respect to transitions between two energy levels of an atom or molecule, emitting or absorbing a photon. He found that (stimulated) absorption and stimulated emission are equally probable, whereas spontaneous emission has a different probability.[23] In stimulated emission, an incoming photon causes the emission of another photon such that there are two photons after the event, mutually coherent, i.e., having the same phase and frequency. Stimulated emission plays an important part in lasers and masers, in which coherent light respectively microwave radiation is produced. Absorption is always stimulated.
Stimulated emission is symmetrical with stimulated absorption, but spontaneous emission is asymmetric and irreversible.
A stable system or a stable state may be separated from other systems or states by an energy barrier. It may be imagined that a particle is confined in an energy well, for instance an alpha-particle in a nucleus. According to classical mechanics, such a barrier is insurmountable if it has a larger value than the kinetic energy of the particle in the well, but quantum physics proves that there is some probability that the particle leaves the well. This is called ‘tunneling’, for it looks like the particle digging a tunnel through the energy mountain.
Consider a chemical reaction in which two molecules A and B associate to AB and conversely, AB dissociates into A and B. The energy of AB is lower than the energy of A+B apart, the difference being the binding energy. A barrier called the activation energy separates the two states. In an equilibrium situation, the binding energy and the temperature determine the proportion of the numbers of molecules (NA.NB/NAB). It is independent of the activation energy. At a low temperature, if the total number of A’s equals the total number of B’s, only molecules AB will be present. In an equilibrium situation at increasing temperatures, the number of molecules A and B increases, and that of AB decreases. In contrast, the speed of the reaction depends on the activation energy (and again on temperature). Whereas the binding energy is a characteristic magnitude for AB, the activation energy partly depends on the environment. In particular the presence of a catalyst may lower the activation energy and stimulate tunneling, increasing the speed of the reaction.
The possibility to overcome energy barriers explains the possibility of transitions from one stable system to another one. It is the basis of theories about radioactivity and other spontaneous transitions, chemical reaction kinetics, the emergence of chemical elements and of phase transitions, without affecting theories explaining the existence of stable or quasi-stable systems.
In such transition processes the characters do not change, but a system may change of character. The laws do not change, but their subjects do.
The chemical elements have arisen in a chain of nuclear processes, to be distinguished as fusion and fission. The chain starts with the fusion of hydrogen nuclei (protons) into helium nuclei, which are so stable that in many stars the next steps do not occur. Further processes lead to the formation of all known natural isotopes up to uranium. Besides helium with 4 nucleons, beryllium (8), carbon (12), oxygen (16), and iron (56) are relatively stable. In all these cases, both the number of protons and the number of neutrons is even.
The elements only arise in specific circumstances. In particular, the temperature and the density are relevant. The transition from hydrogen to helium occurs at 10 to 15 million Kelvin and at a density of 0.1 kg/cm3. The transition of helium into carbon, oxygen and neon occurs at 100 to 300 million Kelvin and 100 kg/cm3.[24] Only after a considerable cooling down, these nuclei form with electrons the atoms and molecules to be found on the earth.
Once upon a time the chemical elements were absent. This does not mean that the laws determining the existence of the elements did not apply. The laws constituting the characters of stable and metastable isotopes are universally valid, independent of time and place. But the realization of the characters into actual individual nuclei does not depend on the characters only, but on circumstances like temperature as well. On the other hand, the available subjects and their relations determine these circumstances. Like initial and boundary conditions, characters are conditions for the existence of individual nuclei. Mutatis mutandis, this applies to electrons, atoms and molecules as well.
In the preceding chapters, I discussed quantitative, spatial, and kinetic characters. About the corresponding subjects, like groups of numbers, spatial figures, or wave packets, it cannot be said that they come into being or decay, except in relation to physical subjects. Only interacting things emerge and disappear. Therefore there is no quantitative, spatial or kinetic evolution comparable to the astrophysical one, even if the latter is expressed in numerical proportions, spatial relations, and characteristic rhythms.
Although stars have a lifetime far exceeding the human scale, it is difficult to consider them stable. Each star is a reactor in which continuously processes take place. Stars are subject to evolution. There are young and old stars, each with their own character. Novae and supernovae, neutron stars, and pulsars represent various phases in the evolution of a star. The simplest stellar object may be the black hole, behaving like a thermodynamic black body subject to the laws of thermodynamics.[25]
These processes play a part in the theory about the astrophysical evolution, strongly connected to the standard model (5.1). It correctly explains the relative abundance of the chemical elements.[26] After the start of the development of the physical cosmos, about thirteen billion years ago, it has expanded. As a result all galaxies move away from each other, the larger the distance, the higher their speed. Because light needs time to travel, the picture we get from galaxies far away concerns states from era’s long past. The most remote systems are at the spatio-temporal horizon of the physical cosmos. In this case, astronomers observe events that occurred shortly after the big bang, the start of the astrophysical evolution.
Its real start remains forever behind the horizon of our experience. Astrophysicists are aware that their theories based on observations may approach the big bang without ever reaching it. The astrophysical theory describes what has happened since the beginning - not the start itself - according to laws discovered in our era. The extrapolation towards the past is based on the supposition that these laws are universally valid and constant. This agrees with the realistic view that the cosmos can only be investigated from within. It is not uncommon to consider our universe as one realized possibility taken from an ensemble of possible worlds.[27] However, there is no way to investigate these alternative worlds empirically.
[1] Lucas 1973, 43-56.
[2] Omnès 1994, 193-198, 315-319.
[3] Dijksterhuis 1950; Reichenbach 1956; Gold (ed.) 1967; Grünbaum 1973; 1974; Sklar 1974, chapter V; Sklar 1993; Prigogine 1980; Coveney, Highfield 1990; Stafleu 2019, chapter 9.
[4] Reichenbach 1956, 135.
[5] Reichenbach 1956, 115.
[6] Putnam 1975, 88.
[7] von Laue 1949; Jammer 1961; Elkana 1974a; Harman 1982.
[8] Jammer 1957; Cohen, Smith (eds.) 2002.
[9] Morse 1964, 53-58; Callen 1960, 79-81; Stafleu 1980, 70-73.
[10] Callen 1960, 293-308.
[11] Morse 1964, 106-118; Callen 1960, 288-292; Prigogine 1980, 84-88.
[12] Sklar 1993, chapters 5-7.
[13] McCormmach 1970a; Kragh 1999, chapter 8.
[14] Jammer 1961, chapter 11.
[15] Pickering 1984, chapter 9-11; Pais 1986, 603-611.
[16] See Walker, Slack 1970, who do not mention Faraday’s ion.
[17] See Millikan 1917; Anderson 1964; Thomson 1964; Pais 1986; Galison 1987; Kragh 1990; 1999.
[18] Pais 1986, 466; Pickering 1984, 67.
[19]Weisskopf 1972, 41-51.
[20] See Barrow, Tipler 1986, 5, 252-254.
[21]Cat 1998, 288.
[22] Callen 1960, 206-207.
[23] Einstein 1916.
[24] Mason 1991, 50.
[25] Hawking 1988, chapter 6, 7.
[26] Mason 1991, chapter 4.
[27] Barrow, Tipler 1986, 6-9.
Encyclopaedia of relations and characters,
their evolution and history
Chapter 6
Organic characters
6.1. The biotic relation frame
6.2. The organization of biochemical processes
6.3. The character of biotic processes
6.4. The secondary characteristic of organisms
6.5. Populations
6.6. The gene pool
6.7. Does a species correspond with a character?
Encyclopaedia of relations and characters. 6. Organic characters
6.1. The biotic relation frame
No doubt, 1859 was the birth year of modern biology. Charles Darwin and Alfred Wallace were neither the first nor the only evolutionists, and their path was paved by geologists in the preceding century establishing that the earth is much older than was previously perceived, and that many animals and plants living in prehistoric times are now extinct.[1] The publication of Darwin’s On the origin of species by means of natural selection drew much attention, criticism as well as approval. In contrast, Gregor Mendel’s discovery in 1865 of the laws called after him, which would become the basis of genetics, was ignored for 35 years. The synthesis of Darwin’s idea of natural selection with genetics, microbiology, and molecular biology (circa 1930) constitutes the foundation of modern biology.
This chapter applies the relational character theory, introduced in chapter 1, to living beings and life processes. The genetic relation, leading to renewal and ageing, is the primary characteristic of living subjects (6.1). I investigate successively the characters of organized and of biotic processes (6.2, 6.3), of individual organisms (6.4) and of populations and their dynamic evolution (6.5, 6.6). For the time being, I shall take for granted that a species corresponds to a character. Section 6.7 deals with the question of whether this assumption is warranted.
Life presupposes the existence of inorganic matter, including the characters typified by the relation frames of number, space, motion, and interaction. Organisms do not consist of other atoms than those occurring in the periodic system of chemical elements. All physical and chemical laws are unrestrictedly valid for living beings and life processes, taking into account that an organism is not a physically or chemically closed system.
Both in living organisms and in laboratory situations, the existence of organized and controlled chemical processes indicates that biotic processes are not completely reducible to physical and chemical ones. In particular, the genetic laws for reproduction make no sense in a physical or chemical context. Rather, they transcend the physical and chemical laws without denying these:
‘Except for the twilight zone of the origin of life, the possession of a genetic program provides for an absolute difference between organisms and inanimate matter.’[2] ‘… the existence of a genetic program … constitutes the most fundamental difference between living organisms and the world of inanimate objects, and there is no biological phenomenon in which the genetic program is not involved …’[3] ‘Everything in a living being is centered on reproduction’.[4] ‘… “life” is not so much defined by certain single characters but by their combination into individualized, purposefully functioning systems showing a specific activity, limited to a certain life span, but capable of reproduction, and undergoing gradually hereditary alterations over long periods.’[5]
For the biotic relation frame the genetic law is appropriate. Each living organism descends from another one, and all living organisms are genetically related. This also applies to cells, tissues, and organs of a multicellular plant or animal. Its descent determines the function of a cell, a tissue, or an organ in an organism, as well as the position of an organism in taxonomy. The genetic law constitutes the universal relation frame for all living beings. Empirically, it is amply confirmed, and it is the starting point of major branches of biological research, like genetics, evolution theory, and taxonomy. However, in physical and chemical research, the genetic law only plays a part in biochemistry and biophysics.
The genetic order is more than a static relationship. It has the dynamics of innovation and ageing. Renewal is a characteristic of life, strongly related to sexual or asexual cell division, to growth and differentiation. The individual life cycle of fertilization, germination, growth, reproduction, ageing, and dying is irreversible. Rejuvenation occurs in a series from one generation to the next, and between cells in a multicellular organism. (This does not exclude neoteny and other forms of heterochrony.[6]) A population goes through periods of rise, blooming, regress, and extinction. Speciation implies innovation as well.
Each living being descends from another living being. The law statement, omne vivum e vivo, is relatively recent. Even in the nineteenth century, generatio spontanea was accepted as a possibility. Empirical and theoretical research have led to the conviction that life can only spring from life.[7] The theory of evolution does not exclude spontaneous generation entirely, for that would constitute the beginning of the biotic evolution. It might even be possible that the two kingdoms of prokaryotes arose independently. In contrast, there are good reasons to assume that eukaryotic cells have evolved from the prokaryotes, and multicellular plants, fungi, and animals from unicellular eukaryotes.
Most biologists accept a stronger law than omne vivum e vivo. It states that all living beings are genetically related, having a common ancestry. This law, to be called the genetic law, is hard to prove. Paleontological research alone does not suffice to demonstrate that all organisms have the same ancestors,[8] but it achieves support from other quarters. The argument that all living beings depend on the same set of four or five nucleic acids and twenty amino acids is not strong. Perhaps no other building blocks are available. But in eukaryotes these molecules only occur in the laevo variant, excluding the mirror-symmetric dextro variant. These two are energetically equivalent, and chemical reactions (as far as applicable) always produce molecules of the two variants in equal quantities. In the production of amino acids, similar DNA and RNA molecules are involved. In widely differing organisms, many other processes proceed identically: ‘… from the bacterium to man the chemical machinery is essentially the same … First, in its structure: all living beings … are made up of … proteins and nucleic acids … constituted by the assembling of the same residues … Second, in its functioning: the same reactions, or rather sequences of reactions, are used in all organisms for the essential chemical operations …’[9] Moreover, all plants, animals, and fungi consist of cells, although there are large differences between prokaryotic and eukaryotic cells, as well as between plant and animal cells. Prokaryotic cells are more primitive and much smaller than eukaryotic cells, and the cell wall is in plants thicker and more rigid than in animals.
The fundamental laws of the universal relation frames cannot be logically derived from empirical evidence, even if this is abundantly available. The laws of thermodynamics, the mechanical conservation laws, and the law of inertia are no more provable than the genetic law. Such fundamental laws function as axioms in a theory, providing the framework for scientific research of characters. In this sense, the genetic law has proved to be as fruitful as the generally valid physical and kinetic laws. This does not mean that such laws are not debatable, or void of empirical content. On the contrary, the law of inertia was accepted in the seventeenth century only after a long struggle with the Aristotelian philosophy of nature, from which science had to be emancipated. The law of conservation of energy and the Second Law of thermodynamics were accepted only about 1850. Similarly, only in the twentieth century the genetic law was recognized after laborious investigations. In all these cases, empirically sustained arguments ultimately turned the scale.
With respect to the biotic relation frame, the theory of evolution is as general as thermodynamics is with respect to physical and chemical relations. Both theories concern aggregates, but they are nevertheless indispensable for understanding the characters of individual things and processes. The main axioms of evolution theory are the genetic law and laws for natural selection with respect to populations.[10] In general terms, the theory of evolution explains why certain species can maintain themselves in their environment and others cannot, pointing out the appropriate conditions. In specific cases, the evolution theory needs additional data and characteristic laws, in order to explain why a certain species is viable in certain circumstances. Also in this respect, evolution theory is comparable to thermodynamics. At the end of the nineteenth century, energeticists like Friedrich Ostwald assumed that thermodynamics should be able to explain all physical and chemical processes. Atomic theory and quantum physics made clear that thermodynamics is too general for that. Likewise, in my view, evolution theory is not specific enough to explain biotic characters.
The genetic law lies at the basis of biological taxonomy. Like plants and fungi, as well as protists and prokaryotes, animals are subject to biotic laws, but I shall assume that they are primarily characterized by another relation frame, to be called psychical (chapter 7). Within their generic psychic character, a specific organic character is interlaced. Genetic relations primarily characterize all other living beings and life processes. Each biotic process is involved with replication (6.3), and the nature of each living being is genetically determined (6.4). Within an organism, physical and chemical processes have the tertiary disposition to function in biotic processes (6.2). Living beings support symbiotic relations leading to evolution (6.5).
The genetic law is a leading principle of explanation for taxonomy and the modern species concept. The universal relation frames allow us of identifying any thing or event, to establish its existence and change, and to find its temporal relations to other things and events. In principle, the genetic law allows of the possibility to order all organisms into a biological taxonomy. The empirical taxonomy does not originate from human thought but from living nature. Its leading principle is not logical but biological. A logical, i.e., deductive classification is based on a division of sets into subsets, considering similarities and differences. It descends logically from general (the kingdoms and phyla) to specific (the species). In contrast, the biological ordering depends on genetic descent, ascending inductively from species to the higher categories.
Genetic relations can be projected on the relation frames preceding the biological one. On the different levels of taxonomy, a species, and a multicellular organism, these mappings can be distinguished as follows.
a. A lineage is a serial projection of the genetic order on the quantitative relation frame. Within a species one finds the linear relation of parent to offspring. Within a multicellular organism the serial order concerns each line of replicating cells. ‘Biological entity’, ‘parent of’, and ‘ancestor’ are primitive, undefinable concepts in the following two axioms: ‘No biological entity is a parent of itself. If a is an ancestor of b, then b is not an ancestor of a.’[11] However, if the mentioned terms are undefined, the natural numbers satisfy these axioms as well (2.1). By counting the intermediary specimens, it is possible to establish the genetic relation between two individuals, organs, or cells, that are serially connected.
b. Parallel lineages are mutually connected by common ancestry. Therefore species, organs, or cells having no serial relation may be related by kinship, the genetic relation between siblings, cousins, etc. Kinship of parallel lineages is to be considered a spatial expression of the genetic relation. Each branching means a new species, a new individual, a new organ, or a new cell. In taxonomy, biologists establish kinship between species on the basis of similarities and differences. These concern shape (morphology), way of life (physiology), development of an organism (in particular embryology), the manner of reproduction, and nowadays especially comparing DNA, RNA, or the proteins they produce.[12] Kindred lineages are connected in a cladogram, a diagram showing the degree of kinship between species. If an organism has several descendants, the lineage branches within a species. In sexual reproduction lineages are connected and each organism has two parents, four grandparents, etc. Within an organism cell division causes branching. In a plant, fungus, or animal, recently branched cells lie close to each other. The larger the distance between two cells, the smaller is their kinship.
c. Genetic development may be considered the kinetic projection of the order of renewal and ageing. Temporal relations are recognizable in the generation difference as a biotic measure mapped on kinetic time. It is the time between two successive bifurcations of a species, between the germination of a plant and that of its seeds, or between two successive cell divisions. If timing is taken into account, a cladogram becomes a phylogenetic tree. Between two splits a population evolves. From germination to death an organism develops, and cells differentiate and integrate into tissues and organs.
d. The dynamic force of evolution within a species and the splitting of species consist of competition and natural selection. These may be considered projections of the genetic relation on the physical. Between plants, the competition concerns physical and chemical resources for existence, between fungi and animals organic ones as well. Competition is a repulsive force, to use a physical term. Besides natural selection, accidental processes lead to genetic changes, mostly in small isolated populations. This phenomenon is called ‘random genetic drift’ or ‘inbreeding’ in common parlance. Breeders use it to achieve desirable plant or cattle variations. There are attractive forces as well. Only within a species, sexual reproduction is the most innovative form of replication. Sexual interaction may be considered a specific physical expression of the genetic relation. Within an organism, neighbouring cells influence each other during their differentiation and integration.
These projections give rise to four types, each of organized chemical processes (6.2), biotic processes (6.3), biotically qualified thing-like characters (6.4), and their aggregates (6.5, 6.6).
Encyclopaedia of relations and characters. 6. Organic characters
6.2. The organization
of biochemical processes
In each living being, many organized biochemical processes take place, having a function in the life of a cell, a tissue, an organ, or an organism. The term organism for an individual living being points to its character as an organized and organizing unit. The organism has a temporal existence. It emerges when the plant germinates, it increases in largeness and complexity during its development, it ages, and after its death it falls apart.
An organized unit is not necessarily a living being. A machine does not live, but it is an organized whole, made after a design. A machine does not reproduce itself and is not genetically related to other machines. Because human persons design a machine, its design cannot be found in the machine itself. In a living organism, the natural design is laid down in the genome, the ordered set of genes based on one of more DNA molecules. ‘If you find something, anywhere in the universe, whose structure is complex and gives the strong appearance of having been designed for a purpose, then that something either is alive, or was once alive, or is an artefact created by something alive.’[13] ‘Entities have functions when they are designed to do something, and their function is what they are designed to do. Design can stem from the intentions of a cognitive agent or from the operation of selection …’[14]
The organism transfers the design from cell to cell and from generation to generation. The natural design changes because of mutation at the level of a single cell, because of sexual interaction at the level of organisms, or caused by natural selection at the level of a population. It is bound to general and specific laws determining the conditions under which the design is executable or viable. A design is the objective prescription for a biotic character. It is a chemical character having a tertiary biotic characteristic.
The processes to be discussed in the present section are primarily physically qualified, and some of them can be organized in a laboratory or factory. Their disposition to have a function in biotic processes is a tertiary characteristic (6.3).
a. Molecules are assembled according to a design. Although the concept of a lineage points to a relation between living beings, there is an analogy on the molecular level. This refers to the assemblage of molecules according to a genetic design as laid down in the DNA molecules. The DNA composition is partly species specific, partly it is unique for each individual living being.
The natural design for an organism is laid down in its genome, the genetic constellation of the genes in a specific sequence. The DNA molecules are the objective bearers of the genetic design, which is the genotype determining the phenotype, that is the appearance of a living being. Each organism has its own genome, being the objective expression of the species to which it belongs as well as of its individuality. Like the DNA molecules, the genome is mostly species specific.
A DNA molecule consists of a characteristic sequence of bases (nucleotides) of nucleic acids indicated by the letters A (adenine), C (cytosine), G (guanine) and T (thymine). In RNA, uracil (U) replaces thymine. The production of uracil costs less energy than that of thymine, which is more stable.[15] Stability is for DNA more important than for RNA that is assembled repeatedly. Hence, mistakes in the transfer of design are easy to correct. The double helix structure enhances the stability of DNA as well. RNA consists of only one series of nucleotides. DNA is the start of the assembly lines of the molecules having a function in life processes. Three nucleotides form the design for one of the twenty possible amino acids. An RNA molecule is a replica of the part of the DNA molecule corresponding to a single gene. Mediated by an RNA molecule each gene designs a polypeptide or protein consisting of a characteristic sequence of amino acids. A protein is a large polypeptide. Sometimes the same gene assembles more than one protein. Often a gene occurs more than once in the DNA, its locus determining how the gene co-operates with other genes. Hence, similar genes may have different functions. A direct relation between a gene and a phenotypic characteristic is rare.[16] Some proteins are enzymes acting as catalysts in these and other material transformations.
According to the ‘central dogma of molecular biology’, formulated by Francis Crick, the transfer of information from DNA via RNA (‘transcription’ by mRNA) to the polypeptides (‘translation’ by tRNA) is irreversible. With respect to the first step, the dogma does not apply entirely to viruses, and there are important differences between prokaryotes and eukaryotes. The intervention of RNA is necessary in eukaryotes, because DNA is positioned in the cell nucleus, whereas the assembly of the polypeptides occurs elsewhere (in ribosomes). In prokaryotes, the translation may start before the transcription is finished. In transcription, a third form is produced, rRNA, concentrated in the ribosome, the organelle where the assembly of polypeptides takes place. Because RNA has mostly a transport function, its tertiary characteristic may be called kinetic.
Although a great deal of the assembly of molecules takes place according to a genetically determined pattern, interaction with the surroundings takes place as well. The environment is first of all the cell plasma, the more or less independent specialized organelles and the cell wall, in which many specific biochemical processes occur. Second, via the cell wall a cell is in contact with the physical and chemical environment. Third, in multicellular organisms the environment includes other cells in the immediate neighbourhood. Finally, only animal cells exert some kind of action at a distance (7.1).
To a large extent, the environment determines which genes or combinations of genes are active, being selectively switched on or off. The activity of genes in a multicellular organism depends on the phase of development. The genome acts in the germination phase differently than in an adult plant, in a root otherwise than in a flower. The genetic constellation determines the growth of an organism. Conversely, the genetic action depends on the development of the plant and the differentiation of its cells. Epigenesis is the name of the process, in which each phase in the development of a plant or animal is determined by preceding phases, genes and environment.[17]
Therefore, DNA is not comparable to a code, a blueprint, a map or diagram in which the phenotype is represented on a small scale. Rather, it is an extensive prescription, a detailed set of instructions for biochemical processes.[18] The conception of the composition of DNA as a code is a metaphor, inspired by the discovery that the structure of DNA can be written in only four symbols.
The enormous variation of molecules is possible because of the equality of the atoms and the uniformity of chemical bonding. This is comparable with the construction of machines. It is easy to vary machines if and as far as the parts are standardized and hence exchangeable. This applies to the disparity of organisms as well. The organization of a plant or an animal consists partly of standardized modules, some of which are homologous in widely different organisms. Such modules exist on the level of molecules (there are only twenty different amino acids, with an enormous variation in combinations), genes (standardized combinations of genes), cells (the number of cell types is restricted to several hundreds), tissues and organs. For evolutionary innovations, the existence of exchangeable parts having a different function in different combinations and circumstances is indispensable.[19] ‘If each new species required the reinvention of control elements, there would not be time enough for much evolution at all, let alone the spectacularly rapid evolution of novel features observed in the phylogenetic record. There is a kind of tinkering at work, in which the same regulatory elements are recombined into new developmental machines. Evolution requires the dissociability of developmental processes. Dissociability of processes requires the dissociability of molecular components and their reassembly.’[20]
b. The biotic functions of molecules depend on their shape. Although the macromolecules occurring in living beings have an enormous diversity, they have much in common as well. Polymers are chains of monomers connected by strong covalent bonds (6.3). Polysaccharides consist of carbohydrates (sugars); polypeptides are constructed from amino acids; and nucleic acids consist of nucleotides. The lipids (fats, oils and vitamins) constitute a fourth important group of large molecules. Lipids are not soluble in water. Lipids are not characterized by covalent bonds but by the weaker Van der Waals bonding. Phospholipids are the most important molecules in biotic membranes. In the double cell wall the molecules are at one end hydrophilic (attracting water), at the other end hydrophobic (repelling water). In the assembly of polymers, water is liberated, whereas polymers break down by hydrolysis (absorption of water).
All organisms apply the same monomers as building blocks of polymers. In contrast, the polymers, in particular the polypeptides and nucleic acids, are species specific. The twenty different amino acids can be connected to each other in each order and in large amounts. As a consequence, the diversity of proteins and their functions is enormous.
Polymers do not only differ because of their serial composition, but in particular by their spatial shape. Like all molecules, they are primarily physically qualified and secondarily spatially founded. DNA’s double helix structure plays a part in its replication in cell division. Also other macromolecules display several spatial structures simultaneously. For the functioning of a protein as an enzyme, its spatial structure is decisive.
Each biochemical process has to overcome an energy barrier (6.6). Increasing the temperature is not suitable, because it accelerates each chemical process and is therefore not selective enough. Catalysis by specialized proteins (enzymes) or RNA molecules (ribozymes) is found in all organisms. In plants, the enzyme rubisco is indispensable for photosynthesis.
The polymers have various functions in an organism, like energy storage, structural support, safety, catalysis, transport, growth, defence, control, or motion. Only nucleic acids have a function in the reproduction of cells and organisms.
c. The genetic development of a living being depends on metabolism, a transport process. A cell can only live and replicate because of a constant stream of matter and energy through various membranes. A unicellular organism has direct contact with its environment, in which it finds its food and deposits its waste. This also applies to a multicellular organism consisting of a colony of independently operating cells, like many algae. These organisms’ ideal environment is salt water, followed by fresh water and moist situations like mud or the intestines of animals. To colonial organisms, this imposes the constraint that a tissue cannot be thicker than two cells.
Multicellular fungi, plants, or animals need internal transport of food, energy, and waste, requiring cell differentiation, in which, for instance, the photosynthetic cells lie at the periphery of plants. Metabolism is an organized stream of matter through the organism. It allows of life outside water. In the atmosphere, oxygen is better accessible than in water, other materials are less accessible.
The cell wall is not merely a boundary of the cell. Nor is it a passive membrane that would transmit some kinds of matter better than others. Rather, it is actively involved in the transport of all kinds of matter from one cell to another. Membranes have an important function in the organization of biochemical processes, the assemblage of molecules, the transformation of energy, the transport of matter, the transfer of information, and the processing of signals. Hence, the presence of membranes may be considered a condition for life.
Plant cells are close together, and transport takes place directly from one cell to the other one. A plant cell has at least one intracellular cavity enclosed by a membrane. This is a vacuole, mostly filled with water, acting as a buffer storage and waste disposal. Animals have intercellular cavities between their cells. Animal cells are connected by proteins regulating the exchange of molecules and information. These proteins play an important part in the development of the embryo as well.
Passive transport is distinguished from active transport. Passive transport lacks an external source of energy and is caused by diffusion in a chemical solution or by osmosis if the solution passes a membrane. Osmosis occurs if a membrane lets pass a solvent (usually water) but not the solved matter. The solvent moves through the membrane in the direction of the highest concentration of the solved matter. This induces a pressure difference across the membrane that counteracts the transport. In equilibrium, the osmotic pressure in some desert plants can be up to a hundred times the atmospheric pressure.
Some substances pass a membrane together with proteins acting as carriers. The concentration gradient is the driving force of diffusion. The size and the electric charge of the molecules concerned and the distance to be travelled also influence the diffusion speed. In particular the distance is a constraint, such that diffusion is only significant within a cell and between two neighbouring cells. To cover larger distances other means of transport are needed.
Active transport requires a source of energy, like adenosine triphosphate (ATP). This transport is coupled to carriers and proceeds against a concentration difference like a pump. Endo- or exocytose in eukaryotic cells is the process in which the cell wall encapsulates the substance to be transported. After the capsule has passed the wall it releases the transported substance. Animal cells have receptors in their wall sensitive for specific macromolecules. Besides, animals have organs specifically designed for transport, for instance by the circulation of blood.
No organism can live without energy. Nearly all organisms derive their energy directly or indirectly from the sun, by photosynthesis. This process transforms water, carbon dioxide, and light into sugar and oxygen. This apparently simple chemical reaction is in fact a complicated and well organized process, only occurring in photosynthetic bacteria and in green plants. The product is glucose (a sugar with six carbon atoms in its molecule), yielding energy rich food for plants and all organisms that feed on plants.
The transformation of energy is a redox reaction. Some molecules oxidize by donating electrons, whereas other molecules reduce by accepting electrons. The first step is glycolysis (transformation of glucose in pyruvate), which does not require oxygen. Most organisms use oxygen for the next steps (cellular respiration). Other organisms are anaerobic, transforming energy by fermentation, which is less efficient. In the absence of oxygen, many aerobic cells switch to fermentation. Because nerve cells are unable to do so, they become easily damaged at a shortage of oxygen. Glycolysis, cellular respiration and fermentation are organized processes with many intermediate steps. The end product consists of ATP and other energy carriers that after transport cede their energy in other chemical reactions.
d. Self-replication of DNA molecules has a function in reproduction. Serving as a starting point for the assemblage of polypeptides, the DNA molecule has a specific spatial structure. It consists of a double helix of two sequences of nucleotides being each other’s complement, because each adenide (A) in one sequence connects to a thymine (T) in the other one and each cytosine (C) in one sequence to a guanine (G) in the other. If the DNA molecule consists of two such strings it is called diploid. In haploid cells, the cell nucleus contains a single string of chromosomes, in diploid cells the chromosomes are paired. Each diploid gene occurs in a pair, except the sex chromosomes, being different in males (XY), equal in females (XX). Each chromosome is a single DNA molecule and consists of a large number of genes. The position of the genes on a chromosome is of decisive significance. On each position (locus) in a chromosome pair, there is at most one pair of genes, being homozygote (equal) or heterozygote (unequal). If in different individuals different genes can occupy the same locus, these genes are called alleles. The two halves are not identical, even if they look alike. This structure makes the DNA molecule very stable. A RNA molecule, acting as an intermediary between a gene on the DNA molecule and the assemblage of a polypeptide, is haploid. Consisting of a single helix, it is less stable than DNA. DNA is not always diploid. Many fungi consist of haploid cells. Only during sexual reproduction, their sex cells are diploid.
The DNA molecule itself is not assembled by another molecule. It has a unique way of self-duplication. Preceding a cell division, the diploid helix unfolds itself and the two haploid halves separate. In sexual cell division (meiosis) the next steps differ from those in the far more frequent asexual cell division (mitosis).
Mitosis is the asexual form of reproduction for unicellular organisms. It also occurs in the growth of all multicellular organisms. After the first division of the diploid DNA molecule, each half doubles itself by separating the two sequences and connecting a new complementary base to each existing base. Hence, two new diploid DNA molecules arise, after which the cell splits as well. The daughter cells are genetically identical to the mother cell.
Meiosis, the sexual cell division, is more varied. In a much occurring variant, the DNA remains haploid after the first splitting. As a rule, after the second division four daughter cells arise, each with half the DNA. Either all four are sperm cells or one is an egg cell, whereas the other three die or become organelles in the egg cell. Only after the egg cell merges with a sperm cell of another individual, a new diploid cell arises. This cell has a different composition of DNA, hence a new identity. This is only possible if the two merging DNA halves fit to each other. In most cases this means that the individuals concerned belong to the same species. In prokaryotes meiosis is often a more complicated process than in eukaryotes.
Cell division is not restricted to DNA replication. The membranes are not formed de novo but grow from the existing ones. In particular, the cell wall of the original cell is divided among the daughter cells. Life builds on life.
Encyclopaedia of relations and characters. 6. Organic characters
6.3. The character of biotic processes
Besides the organized biochemical processes there are processes that are typically biotically qualified. In section 6.5, I shall discuss the genetic changes occurring in a population. Important processes for the dynamic development of an individual organism, to be discussed in the present section, are cell division, spatial shaping, growth, and reproduction.
The genetic identity of a living organism as a whole is determined by its genetic contents. Its heredity is expressed in the genes and their material basis, the DNA molecules. All cells of a multicellular organism have the same DNA configuration and every two living beings have different DNA molecules, except in the case of asexual reproduction. The genes organize the biochemical processes discussed in section 6.2 as well as the biotic processes to be discussed in section 6.3. The genetic identity as an organizing principle of a living being determines its temporal unity. This unity disappears when the organism dies and disintegrates.
a. Cell division is a biotically qualified process that is quantitatively founded. The cell as a subjective unit multiplies itself. Sexual cell division (preceded by sexual interaction, see below) is distinguished from the more frequent asexual cell division (6.2). In the case of an eukaryotic cell, the nucleus too is divided into two halves. The other cell bodies, the cell plasma, and the cell wall, are shared out to the daughter cells and supplied by new ones.
Many organisms reproduce asexually. The prokaryotes and protists (mostly unicellular eukaryotes) reproduce by cell division. Many plants do so by self-pollination. Now the daughter has the same genetic identity as the parent. In this respect they could be considered two spatially separated parts of the same plant. On the one hand, nothing is wrong with this view. Alaska is an integral part of the United States, though it is spatially separated from the mainland. The primary character determines the temporal unity of an individual, and in the case of a bacterium, a fungus, or a plant, this is its genetic identity. Only after a sexual reproduction the daughter plant is a real new individual, genetically different from its parents and any other individual. On the other hand, this view counters natural experience, accepting a plant as an individual only if it is coherent. Moreover, in asexual reproduction not only the spatial coherence is lost, but all kinds of biochemical and biotic interactions as well. This seems to be sufficient to suppose that asexual reproduction gives rise to a new individual.[21]
No single organism is subject to genetic change. Hardly anything can be found that is more stable than the genetic character and the identity of a living being. From its germination to its death, a plant remains identical to itself in a genetic sense. Only in sexual replication genetic change occurs. Of course, a plant is subject to other changes, both cyclic (seasonal) and during its development in its successive phases of life.
b. The genetic relation is not the only factor determining the biotic position of a living being. For each plant and every animal, its relation to the environment (the biotope or ecosystem) is a condition of life. First, the environment concerns subject-subject relations. Symbiosis should be considered a spatially founded way of living together. It is found on all levels of life. Within an eukaryotic cell symbiosis occurs between the cell nucleus and the organelles having their own DNA. In multicellular organisms cells form tissues or organs. In an ecosystem, unicellular and multicellular organisms live together, mutually dependent, competitive or parasitic.
Second, each organism has a subject-object relation to the physical and chemical surroundings of air, water, and soil. Just like the organized matter in the plant, the physical environment has a dynamic function in life processes.
Third, the character of plants anticipates the behaviour of animals and human beings. This constitutes an object-subject relation. By their specific shape, taste, colour, and flavour plants are observable and recognizable by animals as food, poison, or places suited for nesting, hunting, and hiding.
c. The dynamic development of a plant from its germination to adulthood may be considered a kinetically founded biotic process. It is accompanied by differentiation of cells and pattern formation in tissues, and by relative motion of cells in animals. The growth of a plant is strongly determined, programmed by the genome. In the cell division the DNA does not change, but the genes are differentially switched on and off. During the growth, cells differentiate into various types, influenced by neighbouring cells.[22]
There are other influences from the environment, for a plant only grows if the circumstances permit it. Most seeds never start developing, because the external factors are not favourable. Even for a developing plant, the genotype does not determine the phenotype entirely. The development of the plant occurs in phases from cell to cell, in which the phenotype of the next cell is both determined by its genotype and by the phenotype of the preceding cell and the surrounding cells, as well as by the physical, chemical, and organic environment.
The dynamic development of a plant or animal belongs to the least understood processes in biology. During the twentieth century, the attention of biologists was so much directed to evolution and natural selection, that the investigation of individual development processes (in which natural selection does not play a part) receded to the background. The complexity of these processes yields an alternative or additional explanation for the fact that relatively little is known about them. Some creationists take for granted ‘natural’ processes in the development of a human being from its conception, during and after pregnancy, while considering similar processes incomprehensible in evolution. A standard objection is that one cannot understand how by natural selection such a complicated organ like the human eye could evolve even in five hundred million years. However, who can explain the development of the human eyesight in nine months, starting from a single fertilized cell? In both cases, biologists have a broad understanding of the process, without being able to explain all details. (I am not suggesting here that the evolution and the development of the visual faculty are analogous processes.) It starts from a single point, fertilization, and expands into a series of parallel but related pathways. Sometimes one pathway may be changed without affecting others, leading to a developmental dissociation. Usually such dissociation is lethal, but if it is viable, it may serve as a starting point for evolutionary renewal.[23]
d. Sexual reproduction may be considered a primarily biotically qualified process that is secondarily physically founded, like a biotic interaction. Two genetically different cells unite, and the DNA splits before forming a new combination of genes (6.2). By sexual reproduction a new individual comes into being, with a new genetic identity.
Contrary to the growth of a plant, reproduction is to a large extent accidental. Which cells unite sexually is mostly incidental. Usually only sex cells from plants of the same species may pair, although hybridization occurs frequently in the plant kingdom. By their mating behaviour, animals sometimes limit accidents, increasing their chances. Even if a viable combination is available, the probability is small that the seed germinates, reaches adulthood and becomes a fruit bearing plant. Because the ultimate chance of success is small, a plant produces during its life an enormous amount of gametes. On the average and in equilibrium circumstances, only one fruit bearing descendant survives. The accidental nature and abundance of reproduction, supplied with incidents like mutation, is a condition for natural selection. But if it would occur in a similar way during the growth of a plant, no plant would ever reach the adult stage. Dynamic development is a programmed and reproducible process. Sexual reproduction (as well as evolution according to Charles Darwin) is neither.[24]
Fertilization is a biotically qualified process, interlaced with biochemical processes having a biotic function. Moreover, in animals fertilization is interlaced with the psychically qualified mating behaviour that is biotically founded.
Encyclopaedia of relations and characters. 6. Organic characters
6.4. The secondary characteristic
of organisms
Because four relation frames precede the biotic one, we should expect four secondary types of biotically qualified thing-like characters. These are, respectively, quantitatively, spatially, kinetically, or physically founded. Each type is interlaced with the corresponding type of biotic processes mentioned in section 6.3. Moreover, the characters of different types are interlaced with each other as well.
a. It seems obvious to consider the cell to be the smallest unit of life. Each living being (viruses excepted) is either a cell or a composite of cells. However, this conceals the distinction between prokaryotes (bacteria and some algae) and eukaryotes. According to many biologists, this difference is more significant than that between plants and animals.[25] The oldest known fossils are prokaryotes, and during three-quarters of the history of the terrestrial biosphere, eukaryotes were absent. Prokaryotic cells are more primitive and usually smaller than eukaryotic cells. Most prokaryotes like bacteria are unicellular, although some colonial prokaryotes like algae exist. The protists, fungi, plants, and animals consist of eukaryotic cells. A bacterium has only one membrane, the cell wall. An eukaryotic cell has several compartments each enclosed by a membrane. Besides vacuoles these are particles like the cell nucleus, ribosomes (where RNA molecules assemble polypeptides), mitochondria (the power stations of a cell), and chloroplasts (responsible for photosynthesis). Prokaryotes have only one chromosome, eukaryotes more than one. Therefore, biologists consider the prokaryotes to belong to a separate kingdom, or even two kingdoms, the (eu)bacteria and the much smaller group of archaebacteria (archaea).
It appears that the chromosomes in an eukaryotic cell have a prokaryotic character, as well as the genetically more or less independent mitochondria and chloroplasts. Having their own DNA, the latter organelles’ composition is genetically related to that of the prokaryotes. These organelles are about as large as prokaryotic cells. RNA research indicates that mitochondria are genetically related to the purple group and chloroplasts to the cyanobacteria, both belonging to the eubacteria. The most primitive eukaryotes, the archaezoa, do not contain mitochondria of other organelles besides their nucleus. The similarity between prokaryotes and the organelles in eukaryotic cells was first pointed out by Lynn Margulis, in 1965.
Therefore, I consider the character of prokaryotes to be primarily biotic and secondarily quantitative. This may also apply to the characters of the mitochondria, chloroplasts, and chromosomes in an eukaryotic cell, and to the character of viruses as well. None of these can exist as a living being outside a cell, but each has its own character and a recognizable genetic identity.
Contrary to bacteria, viruses are not capable of independently assembling DNA, RNA, and polypeptides, and they can only reproduce parasitically in a cell. Some viruses can be isolated forming a substance that is only physically and chemically active. Only if a virus enters a cell, it comes to life and starts reproducing. Outside the cell, a virus is primarily physically qualified, having a biotic disposition, to be actualized within a cell. Because a virus mainly transports DNA, its character may be considered to have a (tertiary) kinetic disposition (like RNA). A virus has a characteristic shape differing from the shape of a cell. Their character has the tertiary disposition to become interlaced in that of an eukaryotic cell. In eukaryotic organisms, reproduction starts in the prokaryotic chromosomes.
b. A spatially founded biotic character is characterized by symbiosis (6.3). The symbiosis of prokaryotes in an eukaryotic cell is called endosymbiosis. In the character of an eukaryotic cell several quantitatively founded prokaryotic characters are encapsulated. In turn, eukaryotic cells are the characteristic units of a multicellular fungus, plant, or animal. Likewise, an atomic nuleus (having a spatially founded character) acts like a quantitative unit in the character of an atom (5.3). Each cell has a spatial (morphological) shape, determined by the functions performed in and by the cell.
In colonial plants (thallophytes like some kinds of algae), the cells are undifferentiated. As in colonial prokaryotes, metabolism takes place in each cell independent of the other cells. In higher organisms, eukaryotic cells have the disposition to differentiate and to integrate into tissues and organs. Both in cell division and in growth, cells, tissues, or organs emerge having a specific shape. The spatial expression of an organism is found in its morphology, of old a striking characteristic of living beings. Since the invention of the optical microscope in the seventeenth century and the electron microscope in the twentieth, the structure of a cell is well known.
c. Differentiated organisms and organs have a kinetically founded character. Except for unicellular and colonial organisms, each living being is characterized by its dynamic development from the embryonic to the adult phase. Now the replication of cells leads to morphological and functional differentiation. In a succession of cell divisions, changes in morphology and physiology of cells occur. Their tertiary character takes distance from that of the gametes. This gives rise to differentiated tissues and organs like fibres, the stem and its bark, roots, and leaves. These have different morphological shapes and various physiological functions. In a differentiated plant, metabolism is an organized process, involving many cells in mutually dependent various ways (6.2). Growth is a biotic process (6.3). Differentiation enhances the plant’s stability, fitness, and adaptive power.
Differentiation concerns in particular the various functions that we find in a plant. The biological concept of a function represents a subject-object relation as well as a disposition. Something is a biotic object if it has a function with respect to a biotic subject (6.2). Cells, tissues, and organs are biotic subjects themselves. A cell has the disposition to be a part of a spatially founded tissue, in which it has a function of its own. A tissue has an objective function in a differentiated organ. By differentiation the functions are divided between cells and concentrated in tissues. In a differentiated plant, chlorophyll is only found in leaves, but it is indispensable for the whole plant. The leaves have a position such that they catch a maximum amount of light.
Differentiation leads to the natural development from germination to death. The variety in the successive life phases of fertilization, germination, growth, maturing, reproduction, ageing, and natural death is typical for differentiated fungi, plants, and animals.
Although the cells of various tissues display remarkable differences, their kinship is large. This follows from the fact that many plants are able to reproduce asexually by the formation of buds, bulbs, stolons, tubers, or rhizomes. In these processes, new individuals emerge from differentiated tissues of plants. Grafting and slipping of plants are agricultural applications of this regenerative power.
d. Sexual reproduction appears to be an important specific projection of the genetic relation on the physical and chemical relation frame. This biotic interaction between two living beings is the most important instrument of biotic renewal. All eukaryotic organisms reproduce by sexual cell division (even if some species reproduce by other means most of the time). In prokaryotes, the exchange of genetic matter does not occur by sexual interaction, but by the merger of two individuals. Reproduction is a biotic process (6.3), and the part played by DNA replication is discussed in section 6.2.
In the highest developed plants, sexuality is specialized in typical sexual organs, like flowers, pistils, and stamens. Some plant species have separate male and female specimens. In sexually differentiated plants, the sexual relation determines the genetic cycle, including the formation of seeds. Fertilized seeds can exist some time independent of the parent plant without germinating, for instance in order to survive the winter. Sometimes they are provided with a hard indigestible wall, surrounded by pulp being attractive food for animals. The animal excretes the indigestible kernel, co-operating in the dispersal of the seeds.
In particular, sexual reproduction is relevant for the genetic variation within a population. This variation enhances the population’s adaptability considerably. The genetic kinship between individuals in a population is much less than the genetic relation between cells within an individual organism.
The characteristic distinctions between an egg cell and pollen, between male and female sex organs in bisexual plants, and between male and female specimens in unisexual plants, have a function to prevent the merger of sex cells from the same individual. In bisexual plants self-pollination does occur, but sometimes the genetic cycle is arranged such as to preclude this. Fungi are not sexually differentiated but have other means to prevent self-fertilization. Within each fungus species several types occur, such that only individuals of different types can fertilize each other.
The above presented distinction of four biotic types of thing-like characters is only the start of their analysis. Real characters almost always consist of an interlacement of differently typed characters.
First, one recognizes the interlacement of equally biotically qualified but differently founded characters. In eukaryotic cells, an interlacement occurs with various organelles having a prokaryotic character. Because the organelles have various functions, this interlacement leads to a certain amount of differentiation. In all multicellular plants, the character of the cells is interlaced into that of a tissue. In differentiated plants, the character of organs is interlaced with those of tissues. This concerns both their morphological structure and their physiological functions. The highest developed plants display an interlacement of cells, tissues, leaves, roots, flowers, and seeds. Together they constitute the organism, the plant’s primary biotic character. The differentiation of male and female organs or individuals is striking.
Second, the biotic organism is interlaced with characters that are not biotically qualified. First of all, these concern the physically qualified characters of the molecules composing the plant (6.2). Besides we find in a plant kinetic characters, typical motions of the plant as a whole or of its parts. An example is the daily opening and closing of flowers, or the transport of water from roots to leaves. Each plant and each of its cells, tissues, and organs have typical shapes. These characters are by no means purely physical, chemical, kinetic, or spatial. They are opened up by the biotic organism in which their characters are encapsulated. Their tertiary biotic disposition is more obvious than their primary qualifying or secondary founding relation frames. They have a function determined by the organism. Unlike cells and tissues, they do not form parts of the organism, as follows from the fact that they often persist some time after the death of the organism. Everybody recognizes the typical structure of a piece of wood to be of organic origin, even if the plant concerned is dead for a long time. Wood is not alive, but its physical properties and spatial structure cannot be explained from physical laws only. Wood is a product of a living being, which organism orders the physically qualified molecules in a typically biotic way.
Third, we encounter the interlacement of the organism with many kinds of biochemical and biotic processes (6.2, 6.3). Whereas physical systems always proceed to an equilibrium state, an organism is almost never at rest. (A boundary case is a seed in a quasi-stable state). Metabolism is a condition for life. Reproduction, development, and growth of a multicellular organism, and the seasonal metamorphosis of perennial plants, are examples of biotic processes. Each has its own character, interlaced with that of the organism.
The typology of characters differs from the biotic taxonomy. A relatively recent taxonomy of living beings still distinguished five kingdoms: monera (prokaryotes); protoctista or protista (unicellular and colonial eukaryotic organisms); fungi; animalia; and plantae.[26] Up till the sixties, besides the animal kingdom only one kingdom of plants was recognized, including the monera, protista, and fungi besides the ‘true’ plants.[27] Nowadays the prokaryotes are divided into the kingdoms of (eu)bacteria and archaebacteria or archaea, differing from each other as much as they differ from the eukaryotes. The protists form a set of mutually hardly related unicellular or colonial eukaryotes. Fungi are distinguished from plants by having haploid cells most of the time. Being unable to assimilate carbon, they depend on dead organic matter, or they parasitize plants or animals. DNA research reveals that fungi are more related to animals than to plants.
It cannot be expected that the typology discussed in this section would correspond to the biological taxonomy of species. Taxonomy is based on specific similarities and differences and on empirically found or theoretically assumed lineages and kinship. If the biotic kingdoms in the taxonomy would correspond to the division according to their secondary characteristic, this would mean that the four character types would have developed successively in a single line. In fact, many lineages evolve simultaneously. In each kingdom the actualization of animal phyla or plant divisions, classes, orders, etc. proceeds in the order of the four secondary character types and their interlacements. However, their disparity cannot be reduced to the typology based on the general relation frames.
The biological taxonomy, the division of species into genera, families, orders, classes, phyla or divisions, and kingdoms, is not based on the general typification of characters according to their primary, secondary, and tertiary characteristics. Rather, it is a specific typification, based on specific similarities and differences between species.
Encyclopaedia of relations and characters. 6. Organic characters
6.5. Populations
Sections 6.2 and 6.3 investigated physical, chemical, and biotic processes based on projections of the biotic relation frame on the preceding frames. Section 6.4, too, was mainly directed to secondary characteristics of biotic subjects. Now a tertiary characteristic will be considered, the disposition of organisms to adapt to their environment. Organisms do not evolve individually, but as a population in a biotope or ecosystem. Section 6.5 discusses the laws for populations and aggregates of populations, whereas section 6.6 treats the genome and the gene pool as objective aggregates.
a. A population is a homogeneous aggregate, a spatio-temporally bounded and genetically coherent set of living beings of the same species. Hence, a population is not a class but a collection. It is a spatial cross section of a lineage, which in turn is a temporally extended population.[28] Besides being genetically homogeneous, a population is also genetically varied, see below.
Two sets of organisms of the same species are considered different populations, if they are spatially isolated and the exchange of genetic matter is precluded. A population as a whole evolves and isolated populations evolve independently from each other.
A population is a quantitatively founded biotic aggregate, having a number of objective properties open to statistical research, like number, dispersion, density, birth rate, and death rate. These numbers are subject to the law of abundance. Each population produces much more offspring than could reach maturity. The principle of abundance is a necessary condition for survival and evolutionary change. Competition, the struggle for life, sets a limit to abundance.[29]
Being threatened by extinction, small populations are more vulnerable than larger ones. Nevertheless, they are better fit to adaptation. Important evolutionary changes only occur in relatively small populations that are reproductively isolated from populations of the same species. As a ‘founder population’, a small population is able to start a new species. Large, widely dispersed populations are evolutionary inert.[30]
b. A biotope or ecosystem is a heterogeneous aggregate. It is a spatially more or less bounded collection of organisms of different species, living together and being more or less interdependent. The biotic environment or habitat of a population consists of other populations of various species.
A biotope is characterized by the symbiosis of prokaryotes and eukaryotes, of unicellular and multicellular organisms, of fungi and plants. Most biotopes are opened up because animals take part in them, and sometimes because they are organized by human interference. Biotopes like deserts, woods, meadows, or gardens are easily recognizable.
A population occupies a niche in a biotope. A niche or adaptive zone indicates the living room of a population. Both physical and biotic circumstances determine a niche, in particular predator-prey relations and the competition about space and food. Each niche is both possible and constrained because of the presence of other populations in the same area. In general, the geographic boundaries of the habitats of different species will not coincide. Therefore the boundary of a biotope is quite arbitrary.
Each niche is occupied by at most one population. This competitive exclusion principle is comparable to Pauli’s exclusion principle for fermions (6.2, 6.4). If a population that would fit an occupied niche invades an ecosystem, the result is a conflict ending with the defeat of one of the two populations. Sooner or later, some population will occupy an empty niche.
If the physical or biotic environment changes, a population can adapt by genetically evolving or by finding another niche. If it fails it becomes extinct.
c. In each biotope, the populations depend on each other. Each biotope has its food chains and cycles of inorganic material. Fungi living mainly off dead plants form a kingdom of recyclers.[31] Many bacteria parasitize living plants or animals, which, conversely, often depend on bacteria. Sometimes the relation is very specific. For instance, a lichen is a characteristic symbiosis of a fungus and a green or blue alga.
The biotic equilibrium in an ecosystem may change by physical causes like climatic circumstances, by biotic causes like the invasion of a new species, or by human intervention. Like a physical equilibrium, the biotic balance has a dynamic character. If an ecosystem gets out of balance, processes start having the disposition to repair equilibrium or to establish a new equilibrium.
Sometimes the ecological equilibrium has a specific character, if two populations are more or less exclusively dependent on each other, for instance in a predator-prey relation. If the prey increases its number, the number of predators will grow as well. But then the number of prey will decrease, causing a decrease of predators. In such an oscillating bistable system, two ‘attractors’ appear to be active (5.5).
d. Individual organisms are not susceptible to genetic change, but populations are subject to evolutionary change. Besides competition, the driving force of this dynamic development is natural selection, the engine of evolution. With each genotype a phenotype corresponds, the external shape and the functioning of the individual plant. Rather than the genotype, the phenotype determines whether a plant is fit to survive in its environment and to reproduce. Fitness depends on the survival value of an individual plant at short term, and on its reproduction capability and the viability of its offspring.[32] Fitness is a long-term measure for the ability of a population to maintain and reproduce itself.
Natural selection concerns a population and acts on the phenotype. It has the effect that ‘the fittest survives’, as Herbert Spencer would have it. ‘Survival of the fittest’ is sometimes called circular.[33] ‘That which is fit survives, and that which survives is fit’.[34] This circularity may be caused by the fact that fitness is a primitive, undefinable concept in the theory of evolution.[35] Fitness is not definable, but it is measurable as reproductive success. The circularity may be removed by relating survival to an individual and fitness to its offspring: ‘the fit are those who fit their existing environments and whose descendants will fit future environments … in defining fitness, we are looking for a quantity that will reflect the probability that, after a given lapse of time, the animal will have left descendants’.[36]
Fitness is a quantitatively founded magnitude, lacking a metric. Fitness depends on the reproduction of an individual, and on that of its next of kin. The latter is called ‘inclusive fitness’, explaining the ‘altruistic’ behaviour of bees, for instance.
The struggle for life is a process taking place mostly within a population, much less between related populations (if occupying overlapping niches), and hardly ever between populations of different species.[37]
However, the evolution of a population depends on the environment, including the evolution of other populations. The phenomenon of co-evolution means that several lineages evolve simultaneously and mutually dependently. An example is the evolution of seed eating birds and seed carrying plants. The plant depends on the birds for the dispersion of its seeds, whereas the birds depend on the plants for their food. Sometimes, the relation is very specific.
Besides co-evolution, biologists distinguish divergent and convergent evolution of homologous respectively analogous properties.[38] Homology concerns a characteristic having a common origin. In related species, its function evolved in diverging directions. Analogy concerns a characteristic having a corresponding function but a different origin. The emergence of analogous properties is called convergent or parallel evolution. The stings of a cactus are homologous to the leaves of an oak, but analogous to the spines of a hedgehog. The wings of a bird and a bat are homologous to each other, but analogous to the wings of an insect. Light sensitivity or visual power emerged at least forty times independently, hence analogously, but the underlying photoreceptors may have arisen only once, they appear to be homologous.[39]
Encyclopaedia of relations and characters. 6. Organic characters
6.6. The gene pool
The insight that populations are the units of evolution is due to Charles Darwin and Alfred Wallace. It is striking that they could develop their theory of evolution without knowledge of genetics. Besides populations being subjective aggregates of living beings, in the biotic evolution objective aggregates play a part. These objective aggregates consist of genes. Six years after Darwin’s publication of The origin of species (1859), Gregor Mendel discovered the laws of heredity. These remained unnoticed until 1900, and only some time later they turned out to be the necessary supplements to the laws for populations.
Some populations reproduce only or mostly asexually (6.7). In section 6.6, I restrict myself to populations forming a reproductive community, a set of organisms reproducing sexually. Within and through a population, genes are transported, increasing and decreasing in number.
a. The genetic identity of each living being is laid down in its genome, the ordered set of genes (6.2). The genes do not operate independent of each other. Usually, a combination of genes determines a characteristic of the organism. In different phases of development, combinations of genes are simultaneously switched on of off. The linear order of the genes is very important. The number of genes is different in different species and may be very large. They are grouped into a relatively small number of chromosomes, each chromosome corresponding to a DNA molecule. The human genome consists of 23 chromosome pairs and about 30,000 genes. The genes take only 5% of the human DNA, the rest is non-coding ‘junk-DNA’, which (possibly stabilizing) function was not very clear at the end of the twentieth century. A prokaryote cell has only one chromosome. In eukaryotes, genes occur in the cell nucleus as well as in several organelles, such as the mitochondria. The organelles are considered encapsulated prokaryotes (6.4).
Genes are not subjectively living individuals like organisms, organs, tissues, cells, or even organelles. Richard Dawkins assumes that the ‘selfish genes’ are the subjects to evolution,[40] but Ernst Mayr objects: ‘The geneticists, almost from 1900 on, in a rather reductionist spirit preferred to consider the gene the target of evolution. In the past 25 years [of the twentieth century], however, they have largely returned to the Darwinian view that the individual is the principal target.’[41] Genes have an objective function in the character of a living cell. A genome should nor be identified with the DNA molecules forming its material basis, neither with a gene with a sequence of bases. ‘The claim that genetics has been reduced to chemistry after the discovery of DNA, RNA, and certain enzymes cannot be justified … The essential concepts of genetics, like gene, genotype … are not chemical concepts at all …’[42]
Confusion arises from using the same word for a sequence of nucleotides in a DNA molecule and its character, the pattern. In all cells of a plant the DNA molecules have the same pattern, the same character, which is called the plant’s genome. Likewise, a gene is not a sequence of nucleotides, nor a particle in a physical or chemical sense, but a pattern of design. The same gene, the same pattern can be found at different positions in a genome, and at the same locus one finds in all cells of a plant the same pair of genes. Each gene is the design for a polypeptide, and the genome is the design of the organism.
The biotic character of the genome is interlaced with the chemical character of DNA molecules. The genome or genotype determines the organism’s hereditary constitution. The phenotype is developed according to the design expressed in the genome. Both phenotype and genotype refer to the same individual organism.[43]
Nevertheless, genes have their own objective individuality. In asexual cell division, the genome remains the same. The parent cell transfers its genetic individuality to the daughter cells. In sexual reproduction, objective individual genes are exchanged and a new subjective individual organism emerges.
b. A population is characterized by the possibility to exchange genes and is therefore the carrier of a gene pool. Although the members of the population belong to the same species, they are genetically differentiated. In a diploid cell, a DNA molecule consists of a double helix. At each position or locus there are two genes. These genes may be identical (homozygote) or different (heterozygote). Different genes that can occupy the same locus in different organisms in a population are called alleles. Some alleles dominate others. The distribution of the alleles over a population determines their genetic variation, satisfying Gregor Mendel’s laws in simple cases. In sexual reproduction, the pairs of genes separate, in order to form new combinations in the new cell (6.3).
At any time, the gene pool is the collection of all genes present in the population. The exchange of alleles in sexual reproduction leads to changes in the frequencies within the gene pool, but does not change the genes themselves. For changing genes, several other mechanisms are known, such as mutation, crossing-over, and polyploidy.
Mutations may have a physical cause (e.g., radioactivity), or a biotic one (e.g., a virus). Mutations are usually indifferent or even lethal, but sometimes enriching. For every gene, they are very rare, but because there are many genes in an individual and even more in a gene pool, they contribute significantly to the variation within a species. Crossing-over means a regrouping of genes over the chromosomes. Polyploidy means that a DNA molecule consists of more than two strings, on each or some loci there are three genes in stead of two. Usually, the location of the genes does not change. It is a specific property of the species. Hence, the way genes co-operate is also specific for a species.
A population in which sexual reproduction occurs without constraints is subject to the statistical law of Godfrey Hardy and Wilhelm Weinberg (1908): on a certain locus in the genome the frequency of the alleles in the gene pool in a stable population is constant, generation after generation. Only selective factors and hybridization with another population may disturb the equilibrium.[44] Populations are hardly ever in equilibrium. The relevance of the law of Hardy and Weinberg is that deviations point to equilibrium disturbing factors. In small populations ‘genetic drift’ occurs, changes in the gene pool caused by accidental circumstances.
Hybridization between related species or different populations of the same species give rise to a new species or race if three conditions are met. First, the hybrids are fertile. Second, there is a niche available in which the hybrids are better adapted than the original population. Third, the new combination of genes becomes isolated and sufficiently stabilized to survive.[45] Usually, hybridization is impossible, because the offspring is not viable, or because the offspring is not fertile, or because the offspring has a decreasing fertility in later generations.
Observe that the organisms determine the frequency of the genes in the pool. The character of each gene is realized in DNA. Still, it makes no sense to count the number of DNA molecules in a population, because DNA is found in each cell and most cells have no significance for the gene pool. Even the number of gametes is irrelevant for calculating the gene frequency. The frequency of genes in the pool is the weighed frequency of the organisms in the population, being the carriers of the gene concerned. ‘A community of interbreeding organisms is, in population genetic terms, a gene pool.’[46] For instance, if at a certain locus a gene occurs once in 10% of the organisms and twice in 10%, the gene has a frequency of 15% in the gene pool, because each locus contains two alleles. (The complication that on different loci the same gene may occur is left out of consideration in this example.) By natural selection, the frequency of a gene may increase or decrease, depending on the fitness of the organisms in the population.
c. Because of external circumstances, the gene pool may change very fast. Within a few generations, the distribution of a gene pair AB may change from 90% A, 10% B into 10% A, 90% B. This means that a population is able to adapt itself to changes in its habitat, and to increase its chances of survival and reproduction. In a radical environmental change (in particular if a part of the population is isolated), hereditary variation within a species may give rise to the realization of a new species. Hence, adaptation and survival as concepts in the theory of evolution do not concern individual organisms (being genetically stable), but populations. Only populations are capable of genetic change.
Natural selection as such is not a random process,[47] but it is based on at least two random processes, to wit mutation and sexual reproduction. Which alleles combine in mating rests on chance. The enormous amounts of cells involved in reproduction compensate for the relatively small chance of progress.
e. The phenotype (not the genotype) determines the chance of survival of an organism in its environment. The phenotype is the coherent set of the functions of all parts of the organism, its morphology, physiology, and its ability to reproduce. The genotype generates the phenotype, whereby development and environment factors play an additional but important part. Natural selection advances some phenotypes at the cost of others, leading to changes in the gene pool. Together with changes in the genes themselves, natural selection induces small changes in each generation to accumulate to large changes after a large number of generations.
The received theory of evolution emerged shortly after 1930 from a merger of Charles Darwin’s theory of natural selection with genetics and molecular biology. It presupposes that evolution occurs in small steps. Major changes consist of a sequence of small changes. In many cases, this is an acceptable theory. Nevertheless, it would be honest to admit that there is no biological explanation available for the emergence of prokaryotes (about three billion years ago); of eukaryotes (circa one billion years ago); of multicellular organisms (in the Cambrium, circa 550 million years ago); of sexual reproduction; of animals; and of the main animal phyla, plant divisions, classes, and orders. At the end of the twentieth century, the empirical evidence available from fossils and DNA sequencing is not sufficient to arrive at theories withstanding scientific critique.
Encyclopaedia of relations and characters. 6. Organic characters
6.7. Does a species correspond
with a character?
In this encyclopaedia, a natural character is defined as a set of laws determining an ensemble of possibilities besides a class of individuals (1.2). A class and an ensemble are not restricted in number, space, and time. They do not change in the course of time and do not differ at different places. A population is not a class but a collection. Hence, it does not correspond to a character. The question of whether a species corresponds to a character is more difficult to answer. ‘There is probably no other concept in biology that has remained so consistently controversial as the species concept.’[48] Philosophers interpreting the concept of a natural kind in an essentialist way rightly observe that a biotic species does not conform to that concept. However, the idea that a character is not an essence but a set of laws sheds a different light on the concept of a species. The main problem appears to be that insufficient knowledge is available of the laws determining species. Instead, one investigates the much better accessible subject and object side of these unknown laws.
Generally speaking, biologists have a realist view on sets, considering a species to be a natural set. Each living being belongs to a species, classified according to a variety of practical criteria, which do not always yield identical results. Besides, there are quite a few theoretical definitions of a species.[49] The distinction between operational criteria used in practice and theoretical definitions is not always sharp. Practice and theory are mutually dependent. However, not distinguishing them gives rise to many misunderstandings. ‘… the species problem results from confusing the concept of a species itself with the operations and evidence that are used to put that concept in practice.’[50]
Criteria to distinguish species from each other are grouped into genealogical (or phylogenetic), structural, and ecological criteria. This corresponds more or less to a division according to primary, secondary, and tertiary characteristics as defined in chapter 1.
Species can be distinguished because they show distinctive, specific properties. These are regular, therefore lawlike. This is not merely interesting for biologists. In particular in sexual relationships, animals are able to distinguish other living beings from those of their own kind.
Primary criteria to distinguish species are genealogical. The biological taxonomy is based on empirical or theoretically established lineages. A population is a segment of a lineage. A taxon (for instance, a species, genus, family, order, or phylum) is defined as a set of organisms having a common ancestry. A monophyletic taxon or clade comprises all and only organisms having a common ancestry. Birds and crocodiles are monophyletic, both apart and together. A set of organisms lacking a common ancestry is called polyphyletic. Such a set, like that of all winged animals, is not suited for taxonomy. A taxon consisting of some but not all descendants of a common ancestor is called paraphyletic. For instance, reptiles have a common ancestry, but they share it with the birds, which are not reptiles. Opinions differ about the usefulness of paraphyletic taxons.
The biological taxonomy clearly presupposes genetic relations to constitute a general biotic relation frame. ‘… the general lineage concept is a quintessential biological species concept: inanimate objects don’t form lineages.’[51] Descent providing the primary, genealogical criterion for a species has two important consequences.
The first consequence is seldom explicitly mentioned, but always accepted. It is the assumption that an individual living being belongs to the same species throughout its life. (It may change of population, e.g., by migration.) This means that species characteristics cannot be exclusively morphological. In particular the shape of multicellular fungi, plants, and animals changes dramatically during various phases of life. The metamorphosis of a caterpillar into a butterfly is by no means an exception. The application of similarities and differences in taxonomy has to take into account the various phases of life of developing individuals.
Second, as a rule each living being belongs to the same species as its direct descendants and parents. Therefore the dimorphism of male and female specimens does not lead to a classification into different species. A very rare exception to this rule occurs at the realization of a new species. A minimal theoretical definition says that a species necessarily corresponds to a lineage, starting at the moment it splits off from an earlier existing species, and ending at its extinction.[52]
If this minimal definition would be sufficient as well as necessary, a species would be a collection, like a population bounded in number, space, and time. But this definition cannot be sufficient, because it leaves utterly unclear what the splitting of a species means. Branching alone is not a sufficient criterion, because each lineage branches (an organism has various descendants, and in sexual replication each organism has two parents, four grandparents, etc.). According to the primary criterion alone, the assumption that all organisms are genetically related would mean that either all organisms belong to the same species, or each sexual reproduction leads to a new species. Hence, additional secondary and perhaps tertiary criteria are needed to make clear, which kind of branching leads to a new species.[53] ‘In effect, the alternative species definitions are conjunctive definitions. All definitions have a common primary necessary property – being a segment of a population-level lineage – but each has a different secondary property – reproductive isolation, occupation of a distinct adaptive zone, monophyly, and so on.’[54]
The most practical criteria are structural. It concerns similarities and differences based on the DNA-structure (the genotype), besides the shape (morphology), and processes (physiology, development), making up the phenotype. In DNA and RNA research, biologists look at similarities and differences with respect to various genes and their sequences, taking into account the locus where they occur. The comparison of genes at different loci does not always give the same results. Hence people should be cautious with drawing conclusions. It should be observed that DNA and RNA research is usually only possible with respect to living or well-conserved cells and only establishes more or less contemporary relations (in a geological sense: ‘contemporary’ may concern millions of years). This also applies to other characteristics that cannot be fossilized, like behaviour. Non-contemporary similarities and differences are mostly restricted to morphological ones. For the agreement between various related species, homologies are very important (6.6).
Many biologists accept as a decisive distinction between species the existence of a reproductive gap between populations. ‘A species is a reproductive community of populations (reproductively isolated from others) that occupies a specific niche in nature.’[55] Within a species, individuals can mate fruitfully with each other, whereas individuals of different species cannot. Ernst Mayr mentions three aspects of a biotic species. ‘The first is to envision species not as types but as populations (or groups of populations), that is, to shift from essentialism to population thinking. The second is to define species not in terms of degree of difference but by distinctness, that is, by the reproductive gap. And third, to define species not by intrinsic properties but by their relation to other co-existing species, a relation expressed both behaviorally (noninterbreeding) and ecologically (not fatally competing).’[56] ‘The word “species” … designates a relational concept’.[57] According to this definition, horses and donkeys belong to different species. A horse and a donkey are able to mate, but their offspring, the mules, are not fertile. The mentioning of populations is relevant. The reproduction gap does not concern individuals but populations.
Sometimes, a population A belongs to the same species as population B, B ditto with C, but C does not with respect to A.[58] Hence, the concept of a species according to this criterion is not always transitive. The possibility to mate and having fertile descendants is only relevant for simultaneously living members of a population. Hence it serves as a secondary addition to the primary genealogical criterion, stating that organisms living long after each other (and therefore unable to mate) may belong to the same species. Taking this into account, the mentioned lack of transitivity can be explained by assuming that one of the populations concerned is in the process of branching off. After some time, either A or C may become to belong to an independent species.
The reproduction gap is in many cases a suitable criterion, but not always. First, some species only reproduce asexually. This is not an exception, for they include the prokaryotes (the only organisms during three-quarters of the history of life on earth).[59] Second, many organisms that experts rank to different species are able to fertilize each other. Hybrid populations are more frequent in plants than in animals. The reproductive gap is in animals more pronounced than in plants, because of the animals’ mating behaviour and the corresponding sexual dimorphy. Mating behaviour leads to the ‘recognition species concept’.[60]
A tertiary criterion concerns the disposition of a species to find a suitable niche or adaptive zone (6.5). How organisms adapt to their environment leads to the formulation of ecological criteria to distinguish species. This is a relational criterion too, for adaptation does not only concern physical (e.g., climatic) circumstances, but in particular the competition with individuals of the same or of a different species.
Biologists and monist biophilosophers look after a universal concept of a species. According to Hull, the concept of a species ought to be universal (applicable to all organisms), practical in use, and theoretically significant.[61] He observes that monists are usually realists, pluralists being nominalists.[62] Supposing that a species corresponds to a character, it should be primarily biotically qualified. No difference of opinion is to be expected on that account. But what should be its secondary characteristic? Considering the analysis in section 6.4, for prokaryotes the quantitative relation frame comes to mind (cell division); for unicellular or colonial eukaryotes the spatial frame (morphological shape and coherence); for differentiated plants the kinetic frame (physiology and development); finally, for sexually specialized plants and animals the physical relation frame (the reproductive gap). A species can only be a universal biotic character if the concept of a species is differentiated with respect to secondary and tertiary characteristics. For instance, the secondary criterion based on the reproductive gap is only applicable to sexually reproducing organisms. The pluralistic concept of a species finds its origin in the fact that all secondary and tertiary criteria are restrictively applicable, whereas the universal primary criterion is necessary but not sufficient.[63] Likewise, the physical concept of natural kinds is not universal. For quantitatively, spatially, and kinetically founded characters, different secondary criteria apply (chapter 5).
Some philosophers assume that species are comparable with organisms and they consider a species to be a biotic individual.[64] ‘When species are supposed to be the things that evolve, they fit more naturally in the category individual (or historical entity) than the category class (or kind).’[65] Hull assumes a duality: ‘Classes are spatio-temporally unrestricted, whereas individuals are spatio-temporally localized and connected. Given this fairly traditional distinction, we argued that species are more like individuals than classes’.[66] Clearly, Hull does not distinguish between aggregates and individuals.
A species comes into being by branching off from another species, and it decays at extinction. Species change during their existence. It is true that these processes depend entirely on the replication of the organisms that are part of the species, but that applies to multicellular organisms as well, whose development and growth depend on the reproduction of their cells.
Organisms belonging more or less simultaneously to the same species form a population. Usually a population is a geographically isolated subset of a lineage, a set of organisms having the same ancestry. Both populations and lineages are temporal collections of individuals, not timeless classes. They are aggregates as well, because their members are genetically related. However, an aggregate is not always an individual, and it is always a set of individuals. If considered as a lineage or population (or a set of populations), a species is a temporal collection of individual organisms, subject to biotic laws. I shall not contest this vision that stresses the subject side of a species. But it does not answer the question of whether a species has a law side as well, corresponding with a character.
Both lineages and populations are products of a biotic dynamic evolution. Natural selection, genetic drift, and ecological circumstances explain how lineages emerge, change, and disappear. Geographic isolation explains the existence of various populations belonging to the same species. But natural selection, genetic drift, or geographic isolation does not explain why a group of living beings is viable in the circumstances constituting an adaptive zone. Unavoidably, such an explanation takes its starting point from law statements, whether hypothetical or confirmed. These propositions may very well include the supposition that a lineage and its populations are spatio-temporal subsets of a timeless class, without violating the received facts and theories of evolution and genetics. The character of this class determines an ensemble of possibilities, partly realized in the individual variation occurring in a population.
No more than species, the chemical elements have been realized from the beginning of the universe. Only after the universe was cooled down sufficiently, protons and electrons could form hydrogen and helium. Only after the formation of stars, hydrogen and helium nuclei fused to become heavier nuclei. Nuclear physics provides a quite reliable picture of this chemical evolution. Doubtless, each isotope satisfies a set of laws constituting a character. I believe that the same applies to biotic species, although the complexity of organisms makes it far more difficult to state in any detail which laws constitute a biotic character. Based on an essentialist interpretation, Ernst Mayr turns down the analogy of the species concept in biology with that of mineralogy or chemistry: ‘For a species name in mineralogy is on the whole a class name, defined in terms of a set of properties essential for membership in the class.’[67]
The crossing of a barrier between two species has an analogy in the well-known phenomenon of tunneling in quantum physics (5.7). An energy barrier usually separates a radioactive nucleus from a more stable nucleus. This barrier is higher than the energy available to cross it. According to classical mechanics, a nucleus could never cross such a barrier, but quantum physics proves that there is a finite (even if small) probability that the nucleus overtakes the barrier, like a car passes a mountain through a tunnel. A similar event occurs in the formation of molecules in a chemical reaction. In this case, the possibility to overtake the energy barrier depends on external circumstances like the temperature. The presence of a catalyst may lower the energy barrier. In biochemical processes enzymes have a comparable function. The possibility that an individual physical or chemical thing changes of character is therefore a fact, both theoretically and experimentally established. Moreover, in all chemical reactions molecules change of character, dependent on circumstances like temperature.
Similarly, at the realization of a new species, circumstances like climate changes may enhance or diminish the probability of overcoming one or more constraints. A small, geographically isolated population will do that easier than a large, widely dispersed population. Since 1972, biology knows the theory of ‘punctuated equilibrium’. From paleontological sources, Niles Eldredge and Stephen Gould derived that in a relatively short time (compared to much longer periods of stable equilibrium), a major transition from one species to another may occur.[68] Such a transition would take 50,000 years or more, whereas a stable period may last millions of years.[69]
Quantum physics explains the transition from one character to the other by tunneling, but tunneling does not explain the existence of the characters concerned. Natural selection explains why constraints can be overcome, not why there are constraints, or which types of constraints are operative. Natural selection explains changes within species and from one species to the other, but not why there are species, and which species exist. On the contrary, the existence of species is a condition for the action of natural selection. Populations change within a species, and sometimes they migrate from one species to another one, and its motor, its dynamic force, is natural selection. However, natural selection does not explain everything. The success of natural selection is only explicable by assuming that a population after adaptation is in a more stable equilibrium with its environment than before. What is stable or better adapted, why the chances of survival of an organism increase by a change induced by natural selection, cannot be explained by natural selection itself. Natural selection explains why a population changes its gene pool, but it does not explain why the new situation is more viable. To explain this requires research into the circumstances in which the populations live and into the characters that determine the species.
On the one side, the standard theories about evolution, genetics, ecology, and molecular biology do not exclude the possibility that each species corresponds to a character, a set of laws defining an ensemble of possibilities, sometimes (and never exhaustively) realized by a population of organisms. After all, ‘by far the commonest fate of every species is either to persist through time without evolving further or to become extinct.’[70]
On the other hand, it cannot be proved that a species corresponds to a character. That would only be empirically demonstrable. The idea that an empirical species is a subset of a class subject to a specific set of laws can only be confirmed by pointing out such laws. Evolutionists have a tendency to deny the existence of biotic laws.[71] In contrast, Paul Griffiths asserts that there are laws valid for taxonomy.[72] Michael Ruse stresses that biology needs laws no less than the inorganic sciences. He mentions Gregor Mendel’s laws as an example.[73] And Marc Ereshefsky observes that ‘… there may be universal generalizations whose predicates are the names of types of basal taxonomic units … So though no laws exist about particular species taxa, there may very well be laws about types of species taxa.’[74] For instance, both genetics and development biology look for lawful conditions concerning the constitution of genes and chromosomes determining the phenotype of a viable organism belonging to a species. That is because the biotic expression of a character is a natural design, the genome, objectively laid down in the species-specific DNA.
Natural selection may be considered a random push for the dynamic development of populations of living beings. This development also requires the specific lawful pull of the species concerned.
Should we not consider the ascription of an unchangeable and lawful character to species a relapse into essentialism?[75] Ernst Mayr observes that in Carl Linnaeus’ taxonomy the genera are defined in an essentialist way.[76] He quotes from Linnaeus’ Philosophia Botanica (1751): ‘The ‘character’ is the definition of the genus, it is threefold: the factitious, the essential, and the natural. The generic character is the same as the definition of the genus … The essential definition attributes to the genus to which it applies a characteristic which is very particularly restricted to it, and which is special. The essential definition [character] distinguishes, by means of a unique idea, each genus from its neighbors in the same natural order.’[77]
Essentialism is a theory ascribing a priori an autonomous existence to plants, animals, and other organisms. Their essence is established on rational grounds, preceding empirical experience. Essentialism presupposes the possibility to formulate necessary and sufficient conditions for the existence of each species. The conditions for any species should be independent of the conditions for any other species.[78] This view differs widely from the idea of a character being a specific set of laws. With respect to the subject side, as far as essentialism excludes evolution, the theory of characters is by no means essentialist.
According to Aristotelian essentialism, each species would be autonomous. Biologists and philosophers seem to assume that this paradigm is still applicable to physics and chemistry. But physical things can only exist in interaction with other things, and the actual realization of physically qualified characters is only possible if circumstances permit. For instance, in the centre of the sun no molecules can exist, but we can only say that assuming that the laws which determine the possible existence of molecules are valid within the sun as much as elsewhere. The astrophysical and chemical theories of evolution assume that physical things emerged gradually, analogous to organisms in the biotic evolution. Nevertheless, it is generally accepted that particles, atoms, molecules, and crystals are subject to laws that are everywhere and always valid.
Physical and chemical things can only exist in interaction with each other in suitable circumstances. Similarly, living organisms can only exist in genetic relations with other organisms, permitting the circumstances. Each living organism would perish in absence of other living beings, and no organism can survive in an environment that does not provide a suitable niche.
My reasons to consider a species to be a character are a posteriori, based on scientific arguments open to empirical research. It is a hypothesis, like any other scientific assumption open to discussion. And it is a hypothesis leaving room for the evolution of a population within a species as well as from one species to another one. It is a hypothesis fully acknowledging Charles Darwin’s great discovery of natural selection. Moreover, this hypothesis recognizes the importance of environmental circumstances both determining possibilities and their realization. The laws are not the only conditions for existence. Physical and ecological circumstances are conditions as well. The realization of species can only occur in a certain order, with relatively small transitions. In this respect, too, the evolution of species does not differ from the evolution of chemical elements.
Although essentialists are able to take circumstances into account, the theory of characters moves ahead. The possibilities offered by a character are not merely realizable if the circumstances permit, but the ecological laws are partly the same as the laws constituting the character of a species. The laws forming a character for one species are not separated from the laws forming the character of another species, or from the laws determining biotic processes. Essentialism supposes that each species can be defined independent of any other species.
It is undeniable that my hypothesis runs counter to the kind of evolutionism that denies the existence of constant laws. From the above discussion it will be clear that I do not criticize Darwin’s theory and its synthesis with genetics and molecular biology. By natural selection, the theory of evolution explains the actual dynamic process of becoming and the evolution of populations. I believe that this theory does not contradict the view that species correspond to unchangeable characters and their ensembles. On the contrary, I believe that the facts corroborate the proposed model better than a radical evolutionism denying the existence of laws. The hypothesis that unchangeable laws dominate the species can be investigated on empirical grounds. This discussion belongs to the competence of empirical science.
The answer to the question of whether a species corresponds to a character does not depend on the acceptance or rejection of the belief that characters – not only biotic species – consist of laws given by God. The empirical approach that I advocate is at variance with the creationist view assuming a priori that the species are unchangeable, rejecting any theory of evolution. Creationism uses the Bible as a source of scientific knowledge preceding and superseding scientific research. It contradicts the view that the problem of whether species correspond to constant characters can only be solved a posteriori, based on scientific research.
For the time being, I am inclined to conclude that a species at the law side corresponds with a biotically qualified character, an unchangeable set of laws. The least one can say is that the recognition of a species or a higher taxonomical unit requires an insight into the regularities which make an organism to belong to that category. At the subject side, a species is realized by a lineage, an aggregate of individual organisms, hence with a collection, bounded in number, space, and time.
Evolution means the subjective realization of species. Natural selection is its motor and explains how species are realized. Whether a species is realizable at a certain place and time depends on the character of the species; on the preceding realization of a related species (on which natural selection acts); on the presence of other species (the ecological environment); and on physical circumstances like the climate (the physical environment).
I have no intention to suggest that the biotic evolution is comparable to the astrophysical and chemical evolution in all respects. I conceive of each evolution as a realization of possibilities and dispositions. But the way by which this occurs is strongly different. For physical and chemical things and events, interaction is decisive, including circumstances like temperature and the availability of matter and energy. The biotic evolution depends on sexual and asexual reproduction, with the possibility of variation and natural selection.
Another difference concerns the reproducibility of evolution. The physical evolution of the chemical elements and of molecules repeats itself in each star and each stellar system. In contrast, it is often stated that the biotic evolution is unique and cannot be repeated. It may be better to assert that the actual course of the biotic evolution is far more improbable than that of the physical and chemical ones. Comparable circumstances – a condition for recapitulation – never or hardly ever occur in living nature. In particle accelerators the astrophysical evolution is copied, the chemical industry produces artificial materials, agriculture improves races, in laboratories new species are cultivated, and the bio-industry manipulates genes. All this would be difficult to explain if one loses sight of the distinction between law and subject.
As a character, a biotic design is a set of laws, but for a scientist this does no longer imply a divine designer.[79] Whereas this does not solve the question of the origin of the natural laws, natural science became liberated from too naive views about the observability of divine interventions in empirical reality.
Essentialism survived longest in the plant and animal taxonomy. Until the middle of the twentieth century, this considered the system of species, genera, families, classes, orders, and phyla or divisions to be a logical classification. In this classification, each category was characterized by one or more essential properties. ‘The essentialist species concept … postulated four species characteristics: (1) species consist of similar individuals sharing in the same essence; (2) each species is separated from all others by a sharp discontinuity; (3) each species is constant through time; and (4) there are severe limitations to the possible variation of any one species.’[80]
Biological essentialism is not a remains of the Middle Ages, but a fruit of the Renaissance. From John Ray to Carl Linnaeus, many realistic naturists accepted the existence of unchangeable species. Besides, other biologists had a nominalist view of species.[81]
Ray and Linnaeus were more (Aristotelian) realists than (Platonic) idealists. Ernst Mayr ascribes the influence of essentialism to Plato.[82] ‘Without questioning the importance of Plato for the history of philosophy, I must say that for biology he was a disaster.’[83] Mayr shows more respect for Aristotle, who indeed has done epoch-making work for biology.[84] However, Aristotle was an essentialist no less than Plato was.
The difficulty that some philosophers have with the modern concept of a species can be reduced to a conscious or subconscious allegiance to an essentialist view. The difficulty that some biologists have with the idea of natural law is their abhorrence of essentialism.[85] Therefore, it is important to distinguish essence from lawfulness. The ‘essential’ (necessary and sufficient) properties do not determine a character. Rather, the laws constituting a character determine the objective properties of the things or processes concerned. ‘Essentialism with respect to species is the claim that for each species there is a nontrivial set of properties of individual organisms that is central to and distinctive of them or even individually necessary and jointly sufficient for membership in that species.’[86] Paul Griffiths contests the view that there are no natural laws (in the form of generalizations allowing of counterfactuals) concerning taxonomy.[87] Definition of a natural kind by properties may have a place in natural history, but not in a modern scientific analysis based on theories, in which laws dominate, not properties. The identification of a class by necessary and sufficient conditions is a remnant of rationalistic essentialism, These properties, represented in an ensemble, may display such a large statistical variation that necessary and sufficient properties are hard to find.[88] Moreover, the laws and properties do not determine essences but relations.
A second reason why some biologists are wary of the idea of natural law is that they (like many philosophers) have a physicalist view of laws. Rightly, they observe that the physical and chemical model of a natural law is not applicable to biology.[89] To the nineteenth-century physicalist idea of law belonged determinism and causality. However, determinism is past, and causality is no longer identified with law conformity but is considered a physical relation.
The theory of evolution is considered a narrative about the history of life, rather than a theory about processes governed by natural laws. ‘Laws and experiments are inappropriate techniques for the explication of such events. Instead, one constructs a historical narrative, consisting of a tentative construction of the particular scenario that led to the events one is trying to explain.’[90] But probably biologists will not deny that their work consists of finding order in living nature.[91] ‘Biology is not characterized by the absence of laws; it has generalizations of the strength, universality, and scope of Newton’s laws: the principles of the theory of natural selection, for instance.’[92] About M.B. Williams’ axiomatization of the theory of evolution,[93] Alexander Rosenberg observes: ‘None of the axioms is expressed in terms that restrict it to any particular spatio-temporal region. If the theory is true, it is true everywhere and always. If there ever were, or are now, or ever will be biological entities that satisfy the parent-of relation, anywhere in the universe, then they will evolve in accordance with this theory (or else the theory is false).’[94] But concerning the study of what is called in this book ‘characters’, Rosenberg believes that these ‘… are not to be expected to produce general laws that manifest the required universality, generality, and exceptionlessness.’[95] Yes indeed, these concern specific laws. Evolutionists tend to deny the existence of biotic laws.[96] However, Martin Ruse stresses that biology is no less than the inorganic sciences in need of laws.[97] He points to Gregor Mendel’s laws for an example. Bernhard Rensch gives a list of about one hundred biological generalizations. Paul Griffiths asserts that there are laws valid for taxonomy.[98] Marc Ereshefsky observes that ‘… there may be universal generalizations whose predicates are the names of types of basal taxonomic units … So though no laws exist about particular species taxa, there may very well be laws about types of species taxa.’[99]
The theory of evolution would not exist without the supposition that the laws for life, that are now empirically discovered, held millions of years ago as well. The question of whether other planets host living organisms can only arise if it is assumed that these laws hold there, too.[100]
A third reason may be the assumption that a law only deserves the status of natural law, if it holds universally and is expressible in a mathematical formula. A mathematical formulation may enlarge the scope of a law statement. Yet the idea of natural law does not imply that it has necessarily a mathematical form. Neither should a law apply to all physical things, plants, and animals. Every regularity, every recurrent design or pattern, and every invariant property is to be considered lawful. In particular each character expresses its own specific law conformity. In the theory of evolution biologists apply whatever patterns they discover in the present to events in the past. Hence they implicitly acknowledge the persistence of natural laws, also in the field of biology.
Anyhow, Charles Darwin was not wary of natural laws. At the end of his On the origin of species he wrote:
[1] Rudwick 2005; 2008.
[2] Mayr 1982, 56.
[3] Mayr 1982, 629.
[4] Jacob 1970, 4.
[5] Rensch 1968, 35.
[6] Raff 1996, chapter 8.
[7] Farley 1974; Bowler 1989.
[8] Ruse 1973, 118-121.
[9] Monod 1970, 102-103.
[10] Rosenberg 1985, 136-152.
[11] Rosenberg 1985, 137-138.
[12] Panchen 1992, chapter 9.
[13] Dawkins 1983, 16.
[14] Kitcher 1993, 270.
[15] Rosenberg 1985, 38-43.
[16] Hull 1974, 15-19.
[17] McFarland 1999, 27-29.
[18] Dawkins 1986, 295-296; McFarland 1999, 27.
[19] Raff 1996, chapter 10.
[20] Raff 1996, 27.
[21] Griffiths, Gray 1994.
[22] Griffiths, Gray 1994.
[23] Raff 1996, 260.
[24] Raff 1996, 23.
[25] Mayr 1982, 140, 244; Margulis, Schwartz 1982, 5-11; Ruse 1982, 169-171.
[26] Margulis, Schwartz 1982.
[27] Greulach, Adams 1962, 28.
[28] de Queiroz 1999, 53-54.
[29] Darwin 1859, chapter 3.
[30] Mayr 1982, 602.
[31] Purves et al. 1998, chapter 28: ‘Fungi: A kingdom of recyclers.’
[32] McFarland 1999, 72.
[33] Popper 1974, 137.
[34] Dampier 1929, 319.
[35] Rosenberg 1985, chapter 6; Sober 1993, 69-73.
[36] McFarland 1999, 78.
[37] Darwin 1859, chapter 4.
[38] Panchen 1992, chapter 4.
[39] Mayr 1982, 611; Raff 1996, 375-382.
[40] Dawkins 1976
[41] Mayr 2000, 68-69; Sober 1993, chapter 4.
[42] Mayr 1982, 62.
[43] Ruse 1982, 21, 30, 200-207.
[44] Hull 1974, 57-58; Ridley 1993, 87-92, 131-132.
[45] Ridley 1993, 42-43.
[46] Ridley 1993, 387.
[47] Dawkins 1986, 43, 62.
[48] Mayr 1982, 251. On the biological species concept, see Mayr 1982, chapter 6; Rosenberg 1985, chapter 7; Ereshefsky 1992; Ridley 1993, chapter 15; Wilson (ed.) 1999.
[49] Panchen 1992, 337-338 mentions seven species concepts, others count more than twenty, see Hull 1999.
[50] de Queiroz, 1999, 64.
[51] de Queiroz 1999, 77.
[52] de Queiroz 1999; Mishler, Brandon, 1987, 310.
[53] Ereshefsky 1992, 350.
[54] de Queiroz 1999, 60, 63.
[55] Mayr 1982, 273.
[56] Mayr 1982, 272.
[57] Mayr 1982, 286.
[58] Ridley 1993, 40-42.
[59] Nanney 1999.
[60] Ridley 1993, 392-393.
[61] Hull 1999, 38-39
[62] Hull 1999, 25.
[63] Dupré 1999.
[64] Rosenberg 1985, 204-212; Ridley 1993, 403-404.:
[65] Hull 1999, 32. For a criticism, see Mishler, Brandon, 1987; de Queiroz, Donoghue, 1988; Sober 1993, 149-159; de Queiroz 1999, 67-68.
[66] Hull 1999, 32-33.
[67] Mayr 1982, 251.
[68] Gould, Vrba 1982; Ridley 1993, chapter 19; Strauss 2009, 487-496.
[69] Stebbins 1982, 16-21.
[70] Stebbins 1982, 23.
[71] Dawkins 1986, 10-15.
[72] Griffiths 1966.
[73] Ruse 1973, 24-31.
[74] Ereshefsky 1992, 360
[75] Toulmin, Goodfield 1965.
[76] Mayr 1982, 175-177.
[77] Mayr 1982, 176.
[78] Sober 1993, 145-149; Hull 1999, 33; Wilson 1999, 188.
[79] Dawkins 1986, chapter 1.
[80] Mayr 1982, 260:
[81] Toulmin, Goodfield 1965, chapter 8; Panchen 1992, chapter 6.
[82] Mayr 1982, 38, 87, 304-305.
[83] Mayr 1982, 87.
[84] Mayr 1982, 87-91, 149-154.
[85] Stafleu 2019, 6.8.
[86] Rosenberg 1985, 188; Hull 1999, 33; Wilson 1999, 188.
[87] Griffiths 1999.
[88] Hull 1974, 47; Rosenberg 1985, 190-191.
[89] Hull 1974, 49; Mayr 1982, 37-43, 846.
[90] Mayr 2000, 68:
[91] Rosenberg 1985, 122-126, 211, 219.
[92] Rosenberg 1985, 211.
[93] Rosenberg 1985, 136-152, Hull 1974, 64-66.
[94] Rosenberg 1985, 152.
[95] Rosenberg 1985, 219.
[96] Dawkins 1986, 10-15.
[97] Ruse 1973, 24-31.
[98] Rensch 1968; Griffiths 1999.
[99] Ereshefsky 1992, 360; Hull 1974, chapter 3.
[100] Dawkins 1983.
[101] Darwin 1859, 459.
Encyclopaedia of relations and characters,
their evolution and history
Chapter 7
Characteristics of animal
7.1. The primary characteristic of animals
7.2. Secondary characteristics of animals
7.3. Control processes
7.4. Controlled processes
7.5. Goal-directed behaviour
Encyclopaedia of relations and characters. 7. Characteristics of animal behaviour
7.1. The primary characteristic of animals
The sixth and final relation frame for characters of natural things and processes concerns animals and their behaviour. This, too, is a typical twentieth-century subject. In the United States and the Soviet Union, especially positivistically oriented behaviorists were concerned with laboratory research of the behaviour of animals, in particular with their learning ability. Later on, in Europe ethology emerged, investigating animal behaviour in natural circumstances. I shall not discuss human psychology, which witnessed important developments during the twentieth century as well. Besides ethology and animal psychology, neurology is an important source of information for chapter 7.
Section 7.1 argues that animals are characterized by goal directed behaviour, implying the establishment of informative connections and control. Section 7.2 discusses the secondary characteristic of animals. Section 7.3 deals with the psychical processing of information, section 7.4 with controlled processes and section 7.5 with their goals.
A psychical character is a pattern of behaviour or a program, a lawful prescription. This is a scheme of fixed processes laid down in detail, with their causal connections leading to a specified goal. Behaviour has an organic basis in the nervous system and in the endocrine system (7.2), and a physical and chemical basis in signals and their processing (7.3).
No more than the preceding chapter, this inventory of animal behaviour contains anything new from a scientific point of view. Only the ordering is uncommon, as it derives from the philosophy of dynamic development. The proposed ordering intends to demonstrate that the characters studied in mathematics and science do not merely show similarities. Rather, the natural characters are mutually interlaced and tuned to each other.
For the psychic subject-subject relation, I suggest the ability to make informative and goal directed connections. Psychic control influences organic, physical, chemical, kinetic, spatial, and quantitative relations, but it does not mean their abolishment. On the contrary, each new order means an enrichment of the preceding ones. Physical interactions allow of more (and more varied) motions than the kinetic relation frame alone does. Ever more kinds of motion are possible in the organic and animal worlds. The number of organic compounds of atoms and molecules is much larger than the number of inorganic ones. Organic variation, integration, and differentiation are in the animal kingdom more evolved than in the kingdom of plants. Each new order opens and enriches the preceding ones. By making informative connections, an animal functions organically better than a plant. For this purpose, an animal applies internally its nervous system and its hormones, and externally its behaviour, sustained by its senses and motor organs.
Animals differ in important respects from plants, fungi, and bacteria. No doubt, they constitute a separate kingdom. The theory of evolution assumes that animals did not evolve from differentiated multicellular plants, but from unicellular protozoans.
According to a modern definition, animalia are multicellular: ‘An organism is an animal if it is a multicellular heterotroph with ingestive metabolism, passes through an embryonic stage called a blastula, and has an extracellular matrix containing collagen.’[1] Within the kingdom of the protista (the set consisting of all eukaryotes that do not belong to the animalia, plantae, or fungi), the unicellular protozoans like flagellates and amoebas do not form a well-defined group. The animalia probably form a monophyletic lineage, which would not be the case if the protozoans were included. Therefore, some biologists do not consider the protozoans to be animals, but others do.
In the evolutionary order, the plants may have emerged after the animals. The first fossils of multicellular animals occur in older layers than those of differentiated plants. Fungi are genetically more related to animals than to plants. Possibly, the plants branched off from the line that became the animal kingdom. If so, this branching is characterized by the encapsulation of prokaryotes evolving into chloroplasts. The distinctive property of green plants is their ability of photosynthesis, which is completely absent in animals and fungi. Another difference is the mobility of most animals in contrast to the sedentary nature of most plants. Animals lack the open growth system of many plants, the presence of growth points of undifferentiated cells, from which periodically new organs like roots or leaves grow. After a juvenile period of development, an animal becomes an adult and does not form new organs. Animal organs are much more specialized than plant organs.
If asked to state the difference, a biologist may answer that plants are autotroph and animals heterotroph. Plants achieve their food directly from their physical and chemical environment, whereas animals depend partly on plants for their food supply. David MacFarland divides organisms into producers, consumers, and decomposers. Plants produce chemical energy from solar energy. Animals consume plants or plant eaters. Fungi and bacteria decompose plant and animal remains to materials useful for plants.[2] However, fungi too depend on plants or their remains, and some plants need bacteria for the assimilation of nitrogen. Apart from that, this criterion is not very satisfactory, because it does not touch the primary, qualifying relation frames of plants and animals. It seems to be inspired by a world view reducing everything biological to physical and chemical processes. This view stresses the energy balance, metabolism, and the production of enzymes out of proportion. I believe the distinction between autotroph and heterotroph to be secondary.
Animals are primarily distinguished by their behaviour. A relational philosophy does not look for reductionist or essentialist definitions, but for qualifying relations. The most typical biotic property of all living beings, whether bacteria, fungi, plants, or animals, is the genetic relation, between organisms and between their parts, as discussed in chapter 6. Superposed on this relation, animals have psychic relations between their organs by means of their nervous system, and mutually by means of their observable behaviour. In part, this behaviour is genetically determined; in part, it is adaptable. Obviously, in particular species specific behaviour is genetically determined, because species are biotically qualified characters, if not aggregates (6.7). Different animal species can be distinguished because of their genetically determined behaviour. More differentiated animals have a complex nervous system with a larger capacity for learning and more freedom of action, than simpler animals have.
The taxonomy of the animal kingdom is mostly based on descent and on morphological and physiological similarities and differences. Its methodology hardly differs from that of the plant taxonomy. But there are examples of species that can only be distinguished because of their behaviour. When a new animal species is realized, a change of behaviour precedes changes in morphology or physiology.[3] This means that controlled behaviour plays a leading part in the formation of a new animal species. Because of the multiformity of species specific behaviour, there are far more animal species than plant species, and much less hybrids.
However, animals have a lot in common with plants and fungi, too, because their psychic character is interlaced with biotic characters. Conversely, as a tertiary characteristic some plants are tuned to animal behaviour. Flowering and fruit bearing plants have a symbiotic relation with insects transferring pollen, or with birds and mammals eating fruits and distributing indigestible seeds.
The psychically qualified character of an animal comes to the fore in its body plan (morphology) and body functions (physiology), being predisposed for its behaviour. For this purpose, animals have organs like the nervous system, hormonal glands, and sense organs, that plants and fungi lack. Animals differ from plants because of their sensitivity for each other, their aptitude to observe the environment, and their ability to learn. They are sensitive to internal stimuli and external signals. Sometimes, also plants react to external influences like sunlight. But they lack special organs for this purpose and they are not sensitive to each other or to signals. In a multicellular plant, a combination of such reactions may give rise to organized motions, for instance flowers turning to the sun. Animal movements are not primarily organized but controlled. However, control does not replace organization, but superposes it.
Each plant cell reacts to its direct surroundings, to neighbouring cells or the physical and biotic environment. A plant cell only exerts action by contact, through its membranes. Neighbouring animal cells are less rigidly connected than plant cells. There are more intercellular cavities. Animal cells and organs are informatively linked by neurons, capable of bridging quite long distances. An animal exerts action at a distance within its environment as well, by means of its sense organs, mobility, and activity.
A physical system is stable if its internal interactions are stronger than its external interactions. An organism derives its stability from maintaining its genetic identity during its lifetime (6.3). Only sexual reproduction leads to a new genetic identity. For the stability of an animal, internal control by the nervous and hormonal systems is more important than the animal’s external behaviour.
Informative goal-directed connections express the universal psychic subject-subject relation. Animals receive information from their environment, in particular from other animals, and they react upon it. Mutatis mutandis, this also applies to animal organs. Both internally and externally, an animal may be characterized as an information processor. Provisionally, I propose the following projections on the five relation frames preceding the psychic one.
a. As units of information, signals or stimuli quantitatively express the amount of information. A signal has an external source, causing a stimulus in a sensor, or an impression on a sense organ. A stimulus may have an internal or an external source. In communication technology, the unit of information is called a bit. A neuron has an input for information and an output for instructions, both in the quantitative form of one or more stimuli. The nerve cell itself processes the information.
b. A behaviour program integrates stimuli into information and instruction patterns. Neurons make connections and distribute information. By their sense organs, higher animals make observations and transfer signals bridging short or large distances. The animal’s body posture provides a spatially founded signal.
c. A net of neurons transports and amplifies information, with application of feedback. Communication between animals could be a kinetic expression of the psychic subject-subject relation.
d. Behaviour requires an irreversible causal chain from input to output, intermitted by programmed information processing. Interpretation, the mutual interaction and processing of complex information, requires a memory, the possibility to store information for a lapse of time.
e. The animal’s ability to learn, to generate new informative links, to adapt behaviour programs, may be considered a projection on the biotic subject-subject relation. Learning is an innovative process, unlearning is a consequence of ageing. In the nervous system, learning implies both making new connections between neurons and developing programs.
The psychic subject-subject relation and its five projections should be recognizable in all psychic characters. They are simulated in computers and automatized systems.
Encyclopaedia of relations and characters. 7. Characteristics of animal behaviour
7.2. Secondary characteristics of animals
Animal behaviour has an organic basis in the nervous system. Its character has a genetic foundation. ‘The study of behavior is the study of the functioning of the nervous system and must be carried out at the behavioral level, by using behavioral concepts … the output of the nervous system, manifested as perceptions, thoughts, and actions.’[4] The sense organs are specialized parts of the nervous system, from which they emerge during the development of the embryo. The nervous system controls the animal body and requires observation, registration, and processing of external and internal information. The processing of stimuli, coming from inside or outside the animal body, occurs according to a certain program. This program is partly fixed, partly adaptable because of experience. Consequently, animals react to changes in their environment much faster and more flexibly than plants do. Besides the nervous system, the whole body and its functioning are disposed to behaviour.
a. The basic element of the nervous system is the nerve cell or neuron, passing on stimuli derived from a sensor to an effector. A unicellular animal (a protozoan) has no nerve cells. Rather, it is a nerve cell, equipped with one or more sensors and effectors. ‘Protozoa, being single-cell systems … seem to be organized along principles similar to those governing the physiology of neurons … the protozoan is like a receptor cell equipped with effector organelles.’[5] An effector may be a cilium by which the animal moves. The simplest multicellular animals like sponges consist only of such cells.[6] A sponge (to the phylum Porifera belong about 10.000 species) has no nervous system, no mouth, muscles, or other organs. The cells are grouped around a channel system allowing of streaming water. Each cell is in direct contact with water. A sponge has at least 34 different cell types. The cells are organically but not psychically connected. The even more primitive Placozoa (of which only two species are known) too lack a nervous system. A nerve cell in a more differentiated animal is a psychic subject with a character of its own, spatially and functionally interlaced with the nervous system and the rest of the body. The protozoans and the sponges as well as the neurons in higher animals may be considered to be primarily psychically and secondarily quantitatively characterized thing-like subjects. For all multicellular animals, the neurons and their functioning (inclusive their neurochemistry) are strikingly similar, with only subordinate differences between vertebrates and invertebrates.[7]
b. In a multicellular nervous system, a neuron usually consists of a number of dendrites, the cell body and the axon ending in a number of synapses. Each synapse connects to a dendrite or the cell body of another cell. (There are two kinds of nerve cells, neurons that are connected to each other besides glial cells, supporting the activity of the neurons. In the human brain, glial cells are more numerous than neurons, but I shall only discuss neurons.) A dendrite collects information from a sensor or from another neuron. After processing, the cell body transfers the information via the axon and the synapses to other neurons, or to a muscle or a gland. The dendrites collect the input of information that is processed in the cell body. The axon transports the output, the processed information that the synapses transfer to other cells. In this way, neurons form a network, an organ typical for all animals except the most primitive ones like protozoans and sponges. Neurons are distinct from other cells. The other cells may be sensitive for instructions derived from neurons, but they are unable to generate or process stimuli themselves. The neurons make psychic connections between each other and to other cells, sometimes bridging a long distance. The network’s character is primarily psychically qualified, secondarily spatially founded. One or more neurons contain a program that integrates simultaneously received stimuli and processes them into a co-ordinated instruction.
Jellyfish, sea anemones, corals, and hydrozoans belong to the phylum of cnidarians (now about 10,000 species, but in the early Cambrium much more numerous; whether the pre-Cambrian Ediacaran fauna mostly consisted of cnidarians is disputed[8]). They have a net of neurons but not a central nervous system. The net functions mostly as a connecting system of more or less independent neurons. The neurons inform each other about food and danger, but they do not constitute a common behaviour program. The body plan of cnidarians is more specialized than that of sponges. Whereas the sponges are asymmetrical, the cnidarians have an axial symmetry. They cannot move themselves. Sea anemones and corals are sedentary, whereas jellyfish are moved by sea currents. The nerve net of cnidarians can only detect food and danger. It leads to activating or contracting of tentacles, and to contracting or relaxing of the body. However, even if a jellyfish is a primitive animal, it appears to be more complex than many plants.
c. In the nervous system, signals follow different pathways. Each signal has one or more addresses, corresponding to differentiated functions.
The behaviour of animals displays several levels of complexity. Sensorial, central, and motor mechanisms are distinguished as basic units of behaviour. Often these units correspond with structures in the nervous system and sometimes they are even recognizable in a single neuron. ‘There may often be a close correspondence between systems defined in structural and functional terms, but this is by no means always the case, and it is very easy for confusion to arise.’[9] Only in a net, neurons can differentiate and integrate. Now the three mentioned functions are localized respectively in the sense organs, the central nervous system, and in specialized muscles.
The simplest differentiated net consists of two neurons, one specialized as a sensor, the other as a motor neuron. The synapses of a motor neuron stimulate a muscle to contract. In between, several inter-neurons may be operative, in charge of the transport, distribution, or amplification of stimuli. In the knee reflex, two circuits are operational, because two muscles counteract, the one stretching, the other bending the knee. The two circuits have a sensor neuron in common, sensitive to a pat on the knee. In the first circuit, the sensor neuron sends a stimulus to the motor neuron instructing the first muscle to contract. In the second circuit, a stimulus first travels to the inter-neuron, blocking the motor neuron of the other muscle such that it relaxes.
A differentiated nervous system displays a typical left-right symmetry, with many consequences for the body plan of any animal having a head and a tail. In contrast with the asymmetric sponges and axially symmetric cnidarians, bilateral animals can move independently, usually with the head in front. The bilateral nervous system allowing of information transport is needed to control the motion. The more differentiated the nervous system is, the faster and more variable an animal is able to move. In the head, the mouth and the most important junction (ganglion) of the nerve net are located, in the tail the anus. From the head to the tail stretches a longitudinal chain of neurons, branching out in a net. Sometimes there is a connected pair of such chains, like a ladder. Apparently, these animals are primarily psychically and secondarily kinetically characterized. At this level, real sense organs and a central brain are not present yet, but there are specialized sensors, sensitive for light, touch, temperature, etc.
The simplest bilateral animals are flatworms, having hardly more than a net of neurons without ganglions. A flatworm has two light sensitive sensors in its head enabling it to orient itself with respect to a source of light. Round worms and snails have ganglions co-ordinating information derived from different cells into a common behaviour program. Their reaction speed is larger and their behaviour repertoire is more elaborate than those of flatworms, but considerably less than those of arthropods, e.g.
Progressing differentiation of the nervous system leads to an increasing diversity of animal species in the parallel-evolved phyla of invertebrates, arthropods, and vertebrates. Besides the nervous system, the behaviour, the body plan, and the body functions display an increasing complexity, integration, and differentiation. In various phyla, the evolution of the body plan and the body functions, that of the nervous system and the behaviour, have influenced each other strongly.
Remarkable is an increasing internalization, starting with a stomach.[10] After the conception, every multicellular animal starts its development by forming a blastula, a hollow ball of cells. A sponge is not much more than such a ball. Sponges and cnidarians have only one cavity, with an opening that is mouth and anus simultaneously. The cavity wall is at most two cells thick, such that each cell has direct contact with the environment. Animals with a differentiated nervous system have an internal environment, in cavities which walls are several cells thick. Between neighbouring cells, there are intercellular cavities.
In differentiated animals, biologists distinguish four kinds of tissues (with their functions): epithelium (the external surface of the body and its organs, taking care of lining, transport, secretion, and absorption); connective tissue (support, strength, and elasticity); muscle tissue (movement and transport); and nervous tissue (information, synthesis, communication, and control).[11] Vertebrates have an internal skeleton and internal organs like blood vessels, kidneys, liver, and lungs. These may be distinguished according to their ethological functions: information and control (nervous system and endocrine system); protection, support, and motion (skin, skeleton, and muscles); reproduction (reproductive organs); food (digestion and respiration organs); transport and defence (blood, the hart, the blood-vessels, the lymph nodes, the immune system); secretion, water and salts balance (kidneys, bladder, and guts).[12] As far as a plant has differentiated organs (leaves, flowers, roots, the bark), these are typically peripheral, with an outward direction to the acquisition of food and reproduction. Animal organs are internalized. This is compensated for by the formation of specific behaviour organs directed outward. These are the movement organs like feet or fins; catching organs like a beak or the hands; fighting organs like horns or nails; and in particular the sense organs.
d. Manipulation of the environment requires a central nervous system and sense organs. The most interesting capacities of the nervous system emerge from the mutual interaction of neurons. The storage and processing of information requires a central nervous system. Reflexes are usually controlled outside this centre. The peripheral nervous system takes care of the transport of information to the centre and of instructions from the centre to muscles and glands. It is therefore secondarily kinetically characterized. The physically founded storage and processing of information requires specialization of groups of cells in the centre, each with its own programs.
In particular the sensors are grouped into specialized sense organs allowing of the formation of images. The best known example is the eye that in many kinds of animals uses light sensitivity to produce an image of the surroundings. In 1604, Johann Kepler demonstrated how in the human eye image formation as a physical process proceeds, thanks to the presence of a lens. In all vertebrates and octopuses, it works in the same way. Other animals, e.g., insects, do not have an eye lens. In vertebrates, the image formation occurs at the backside of the retina, in squids at the front side. The visual image formation does not end at the retina. An important part of the brain is involved in the psychic part of imaging. Besides visual, an image may be tactile or auditive, but now there is no preceding physical image formation comparable to the visual one in the eye.
On this level, chains of successive acts occur, in which different organs and organ systems co-operate, such as in food gathering, reproduction, movement, or fighting. Animals have manipulative organs, like teeth and claws. Animals with a central nervous system are primarily psychically and secondarily physically characterized.
e. In the highest animals’ neocortex, the brain is superposed on the autonomous nervous system. In the latter, the same processes occur as in the entire nervous system of lower animals. With respect to the construction of their nervous system and their behaviour, octopuses, birds, and mammals are comparable. Within the nervous system, a division appears between the routine control of the body and less routine tasks. The neocortex can be distinguished from the ‘old brain’, including the limbic system, that controls processes also occurring in lower animals. In primates, there is a further division of labour between the global, spatio-temporal right half and the more concentrated, analytical and serial left half, which in human beings harbours the speech centre. In the neocortex, the learning capacity of animals is concentrated. The difference between old and new brains, or between left and right half, is not rigorous. It points to the phenomenon that new programs always make use of existing ones.
Animals are capable of learning. Learned behaviour called habituation, i.e. an adaptive change in the program caused by experience, occurs both in higher and in lower animals. During habituation a new program emerges that the animal applies in a stimulus-reflex relation. The reverse is dehabituation. A stronger form is sensitivation, learning to be alert for new stimuli.
Instrumental learning, based on trial and error, is biotically founded. It requires imagination besides a sense for cause-effect relations. Only the highest animals are able to learn by experiment (experimental trial and error), in which the animal’s attention is directed to the effect of its activities, to the problem to be solved. Sometimes an AH-Erlebnis occurs. Whether this should be considered insightful learning is controversial.[13]
Sometimes animals learn tricks from each other. Singing birds learn the details of their songs from their parents, sometimes prenatal. Some groups display initiation behaviour. In the laboratory, imitation learning is the imitation of a new or improbable activity or expression for which no instinctive disposition exists. It is a consequence of observing that another animal of the same or a different species performs an act for which it is rewarded.
Mammals, birds, and octopuses have programs that require to make choices. They apply these programs in the exploration of their environment and in playing. Initially, the animal makes an arbitrary choice, but it remembers its choices and their effects. By changing its programs, the animal influences its later choices. The new circumstances need not be the same as in the earlier case, but there must be some recognizable similarity.
Starting from the lowest level, each psychic character has dispositions to interlace with characters at a higher level. Neurons have the disposition to become interlaced into a net that allows of differentiation. The differentiated net may form a central nervous system, at the highest level divided into an autonomous system and a brain. These levels constitute a hierarchy, comparable to the quantum ladder (5.3).
On the one hand, the phenomenon of character interlacement means that the characters having different secondary foundations remain recognizable, on the other hand, it implies an amount of adaptation. A neuron in a net is not the same as a unicellular animal, but it displays sufficient similarities to assume that they belong primarily and secondarily to the same character type. Only the tertiary characteristic is different, because a unicellular protozoan cannot become part of a net of neurons, and because it has sensors and flagellates instead of dendrites and an axon.
The relation between ‘old’ and ‘new’ brains can be understood as a character interlacement as well. In particular, instinctive processes and states like emotions that mammals and birds share with fish, amphibians, and reptiles are located in the limbic system, the ‘reptilian brain’. Hence, the difference between the limbic system in the higher animals and the central nervous system in the lower animals is tertiary, whereas the difference of both with the neocortex is secondary. This character interlacement is not only apparent in the structure of the nervous system. Both the programming and the psychic functioning of the nervous system display an analogous characteristic hierarchy.
Encyclopaedia of relations and characters. 7. Characteristics of animal behaviour
7.3. Control processes
Animals are sensitive for their own body, for each other and for their physical and biotic environment. By observing, an animal as a subject establishes relations with its environment, being the object of its observation. In the subjective observation space (which is not necessarily Euclidean), an animal observes a number of objects in their mutual relationships, dependent on the animal’s needs. Motion of an object is observed against the background of the observation space. Between some changes (as far as of the animal’s interest), the animal makes a causal connection. Together with its own position and its memory, the observation space constitutes the subjective world of experience of an animal, to be distinguished from its objective environment.
Organically, sensors or sense organs bring about observation. The gathering of information is followed by co-ordination, transfer, and processing into instructions for behaviour, via the nervous and endocrine systems. Together, this constitutes a chain of information processing.
I shall distinguish between control processes (7.3), controlled processes (7.4), and psychically qualified behaviour (7.5), each having their specific characters. For information processing, projections on the quantitative up to the biotic relation frames can be indicated, as follows. Although control on the level of genes is very important for animal development, I shall not discuss it.
a. The simplest form of control is to switch on or off a programmed pattern of behaviour, like an electric appliance is put into operation by an on/off switch. Psychology calls this the release after the reception of an appropriate signal. Each signal and each stimulus must surpass a threshold value in order to have effect. Mathematically, a step function represents the transition from one state to the other. Its derivative is the delta function describing a pulse, the physical expression of a stimulus or signal, kinetically represented by a wave packet (chapter 4). In a neuron, a stimulus has the character of a biotically organized chemical process, called an action potential, in which specific molecules (neurotransmitters) play a part. Hence, the objective psychical character of a signal or a stimulus is interlaced with various other characters.
The simplest form of behaviour consists of a direct relation between a stimulus and a response (e.g., a reflex). It depends on a specific stimulus that switches the program on or off. (The program itself may be quite complex). Often, only the output is called behaviour, but there is an unbreakable connection between input, program, and output. Hence it appears better to consider the whole as a kind of behaviour.
Sometimes a program as a whole is out of operation, such that it is insensitive for a stimulus or signal that should activate it. Hormonal action has the effect that animals are sexually active only during certain periods. Hormones determine the difference between the behaviour of male and female specimens of the same species. Sometimes, female animals display male behaviour (and conversely), if treated with inappropriate hormones. Being switched on or off by hormones, sexual behaviour programs appear to be available to both genders.
b. A spatially founded system of connected neurons receives simultaneous stimuli from various directions and co-ordinates instructions at different positions. The integration of stimuli and reflexes does not require a real memory. ‘Immediate memory’ is almost photographic and it lasts only a few seconds. It allows of the recognition of patterns and the surroundings. The reaction speed is low. Recognition of a spatial pattern requires contrast, the segregation of the observed figure from its background.
Often, a program requires more information than provided by a single signal. The observation of a partner stimulates mating behaviour, whereas the presence of a rival inhibits it. Moreover, internal motivation is required. Aggressive behaviour against a rival only occurs if both animals are in a reproductive phase. Besides stimulating, a stimulus may act relaxing, blocking, numbing, or paralysing.
Via the dendrites, several incoming pulses activate simultaneously the psychic program localized in a single cell body or a group of co-operating neurons. Some pulses act stimulating, others inhibiting. In this case, only the integration of stimuli into a pattern produces an instruction that may be a co-ordinated pattern of mutually related activities as well. Each neuron in a net co-ordinates the information received in the form of stimuli through its dendrites. It distributes the processed information via the axon and synapses to various addresses. Various mechanisms can be combined into more complex behaviour systems, like hunting, eating, sexual or aggressive behaviour. A behaviour system describes the organization of sensorial, central, and motor mechanisms being displayed as a whole in certain situations. In electronics, such a system is called an integrated circuit, in computers it is an independent program.
c. Each neuron transports information via its axon to other cells. In a differentiated nervous system, transport and amplification of information occurs in steps, mediating between the reception of signals and the exertion of instructions. As discussed so far, information exists as a single pulse or a co-ordinated set of pulses. However, the information may consist of a succession of pulses as well. The short-term memory (having duration of 10-15 minutes) allows the animal to observe signals arriving successively instead of simultaneously. The stored information is deleted as soon as the activity concerned is completed.
If an observed object moves, it changes its position with respect to its background, enhancing its contrast. Hence, with respect to its background, a moving object is easier to be observed than a stationary object. Likewise, an animal enhances the visibility of an object by moving its eyes.
Amplification of stimuli makes negative feedback possible. This control process requires a sensor detecting a deviation from a prescribed value (the set point) for a magnitude like temperature. Transformed into a signal, the deviation is amplified and starts a process that counters the detected deviation. For a feedback process, no memory is required.
d. Psychologists distinguish sensation from perception. Sensations are the basic elements of experience, representing information. Perception is the interpretation process of sensorial information. ‘Sensations are the basic data of the senses … Perception is a process of interpretation of sensory information in the light of experience and of unconscious inference’.[14] Perception is a new phase between the reception of signals and the exertion of instructions. It allows the animal to observe changes in its environment, other than motions for which a short-term memory is sufficient.
A physically differentiated nervous system may include chemical, mechanical, thermal, optic, acoustic, electric, and magnetic sensors, besides sensors sensitive for gravity or moisture. The sense organs distinguish signals of a specific nature and integrate these into an image, that may be visual, tactile, or auditive, or a combination of these.
An animal having sense organs is capable of forming an image of its environment and storing it in its memory. It is able to make a perceptive connection between cause and effect.[15] This does not mean a conceptual insight into the abstract phenomenon of cause and effect - that is reserved to human beings. It concerns concrete causal relations, with respect to the satisfaction of the animal’s needs of food, safety, or sex. For instance, an animal learns fast to avoid sick making food. An animal is able to foresee the effects of its behaviour, for the best predictor of an event is its cause.
Imaging allows an animal to get an impression of its changing environment in relation to the state of its body. The animal stores the image during some time in its memory, in order to compare it with an image formed at an earlier or later time. This is no one-way traffic. Observation occurs according to a program that is partly genetically determined, partly shaped from earlier experiences, and partly adapts itself to the situation of the moment. Observation is selective: an animal only sees what it needs in order to function adequately.
In observation, recollection, and recognition, comparison with past situations as well as knowledge and expectations play a part. If an animal recognizes or remembers an object, this gives rise to a latent or active readiness to react adequately. Not every circuit reacts to a single stimulus switching it on or off. Stimuli derived from a higher program may control a circuit in more detail. This is only possible in a nervous system having differentiation and perception besides co-ordination, and allowing of transport and storage of information. The long-term memory is located in the central nervous system, requiring specialized areas coupled to the corresponding sense organs. (The distinction between immediate, short, and long term memory does not concern their duration, but their function - on which the duration depends - as described in this section.)
Recognition based on image formation does not occur according to the (logical) distinction of similarities and differences, but holistic, as a totality, in the form of a Gestalt. Recollection, recognition, and expectation, respectively concerning the past, the present, and the future, give rise to emotions like joy, sorrow, anger, or fear. Probably, in animals it concerns always ‘object bound’ emotions, e.g., the fear of an other animal. In human beings one finds anxiety in the I-self relation as well. Images psychically interact with each other or with inborn programs. Emotions act like forces in psychic processes, in which both the nervous and the endocrine system play their parts. Sometimes the cause of behaviour is an internal urge or driving force (the original meaning of ‘instinct’). This waits to become operative as a whole until the animal arrives at the appropriate physiological state and the suitable environment to utter its instinct.
Imaging allows an animal to control its behaviour by its expectations, by anticipations, by ‘feedforward’. (‘The term feedforward … is used for situations in which the feedback consequences of behaviour are anticipated and appropriate action is taken to forestall deviations in physiological state.’[16]) The intended goal controls the process. Animals drink not only to lessen their thirst, but also to prevent thirst. Taking into account observations and expectations, animals adapt the set point in a feedback system. (In a thermostat, the desired temperature is called the ‘set point’. In homeostasis, the set point is constant, in an active control process the set point is adaptable.)
e. Fantasy or imagination is more than processing of information. It is innovative generation of information about situations which are not realized yet. It allows higher animals to anticipate on expected situations, to make choices, to solve problems, and to learn from these. It requires a rather strongly developed brain able to generate information, in order to allow of choosing between various possibilities. At this level, emotions like satisfaction and disappointment occur, because of agreement or disagreement between expectation and reality. In particular young mammals express curiosity and display playful behaviour.
Animals control their learning activity by directing their attention. Attention for aspects of the outer world depends on the environment and on the animal’s internal state. A well-known form of learning in a new born animal is imprinting, for instance the identification of its parents. Sometimes, comparing of experience leads to the adaptation of behaviour programs, to learning based on recognized experiences. Associative learning means the changing of behaviour programs by connecting various experiences. In a conditioned reflex, an animal makes connections between various signals. Repetition of similar or comparable signals gives a learning effect known as reinforcement (amplification by repetition).[17]
Encyclopaedia of relations and characters. 7. Characteristics of animal behaviour
7.4. Controlled processes
All controlled processes are organized processes as well, and subject to physical and chemical laws. In an organized process (6.2), enzymes are operative, by lowering or heightening energy barriers. Hormones play a comparable stimulating or inhibiting part. Technologists speak of control if a process having its own source of energy influences another process, having a different energy source.
a. Like an electron, a stimulus corresponds to a kinetic wave packet, whereas the transport and processing of a pulse have a physical nature. Transport of information occurs by means of an electric current or a chemical process in a nerve. The distribution of hormones from the producing gland to some organ, too, constitutes information transport. In invertebrates, the stimulus has often the form of an electric pulse, in vertebrates it is a chemical pulse (an action potential). Whereas the neurons produce most stimuli, external signals induce stimuli as well. The induction and transport of stimuli happens in a characteristic way only occurring in animals. However, the accompanying characters of a physical pulse and a kinetic wave packet are fairly well recognizable.
b. The body plan of an animal is designed for its behaviour. Complex behaviour requires co-ordinated control by an integrated circuit in the nervous system, usually combined with the endocrine system. A special form of co-ordinated behaviour follows an alarm. This brings the whole body in a state of alertness, sometimes after the production of a shot of adrenalin. The animal’s body posture expresses its emotive state.
c. Controlled motions recognizable as walking, swimming, or flying, are evidently different from physical or biotic movements, even without specifying their goal. Psychically qualified behaviour is recognizable because of its goal, like hunting or grazing. One of the most important forms of animal behaviour is motion. For a long time, the possibility to move itself was considered the decisive characteristic of animals. Crawling, walking, springing, swimming, and flying are characteristic movements that would be impossible without control by feedback. The animal body is predisposed to controlled motion, such that from fossils it can be established how now extinct animals were moving. Not all movements are intended to displacement, they may have other functions. Catching has a function in the achievement of food, chewing in processing it. Animal motions are possible because animal cells are not rigidly but flexibly connected, having intercellular cavities (unlike plant cells). Muscular tissues are characteristically developed to make moving possible.
Many of the mentioned movements are periodic, consisting of a rhythmic repetition of separate movements.[18] Many animals have an internal psychic clock regulating movements like the heartbeats or the respiration. The circadian clock (circa die = about a day) tunes organic processes to the cycle of day and night. Other clocks are tuned to the seasons (e.g., fertility), and some coastal animals have a rhythm corresponding to the tides.
The more complicated an animal is, the more important the control of its internal environment. Homeostasis is a characteristic process controlled by feedback. Many animals keep their temperature constant within narrow limits. The same applies to other physical and chemical parameters.
Animals with a central nervous system and specialized sense organs control their external behaviour by means of feedback. They are able to react fast and adequately to changes in their environment.
d. In particular in higher animals, the nervous system controls almost all processes in some way. Respiration, blood circulation, metabolism and the operation of the glands would not operate without control. The animal controls its internal environment by its nervous system, that also controls the transport of gases in respiration and of more or less dissolved materials in the guts and the blood vessels. Whereas in plants metabolism is an organized process, in animals it is controlled as well.
Internal processes are usually automatically controlled, but in specific actions, an animal can accelerate or decelerate them or influence them in other ways. Animals with sense organs also control external processes like the acquisition of food.
The development of a differentiated animal from embryo to the adult form is a controlled biotic process. The growth of an animal starting from its conception is influenced by the simultaneously developing nervous system. In mammals before the birth, there is an interaction with the mother, via the placenta. Emotions induced by the observation of a partner or a rival control mating behaviour.
e. Many forms of behaviour, such as mating, are genetically programmed. Through the genes, they are transferred from generation to generation. They are stereotype, progressing according to a fixed action pattern. The programming of other forms of behaviour occurs during the individual’s development after its conception. Earlier, I observed that the genome should not be considered a blueprint (6.2). Even in multicellular differentiated plants, the realization of the natural design during the growth is not exclusively determined by the genome, but by the environment of the dividing cell as well. The tissue to which the cell belongs determines in part the phenotype of the new cells. Besides, in animal development during the growth the nervous and endocrine systems play a controlling part. While the nervous system grows, it controls the simultaneous development of the sense organs and of other typically animal organs like the heart or the liver.
Besides the animal body including the nervous system, the programs in the nervous system are genetically determined, at least in part. Partly they are developed during growth. Moreover, animals are capable of changing their programs, to learn from information acquired from their environment. Finally, the exertion of a program depends on information received by the program from elsewhere.
Behaviour programs consist of these four components. Hence, there is no dualism of genetically determined and learned behaviour. ‘We cannot dichotomize mammalian behaviour into learned and unlearned …’ Daniel Lehrman and others criticize Konrad Lorenz’s definition of instinctive behaviour to be genetically determined (in contrast to learned behaviour).[19] Each kind of behaviour has inherited, learned and environmental components.[20] ‘The innate/learnt type of dichotomy can lead to the ignoring of important environmental influences on development.’[21] Behaviour emerges as a relation of the animal with its environment, as adaptation in a short or a long time. First, by natural selection a population adapts the genetic component to a suitable niche. Next, an individual animal actualizes this adaptation during its development from embryo to adult. Third, its learning capacity enables the individual to adapt its behaviour to its environment much faster than would be possible by natural selection or growth. Fourth, the input of data in the program allows the animal to adapt its behaviour to the situation of the moment.
Encyclopaedia of relations and characters. 7. Characteristics of animal behaviour
7.5. Goal-directed behaviour
Behaviour consists of psychically qualified events and processes. It emerges as a chain from stimulus or observation via information processing to response. It is always goal-directed, but it is not goal-conscious, intentional, or deliberate, these concepts being applicable to human behaviour only. Since the eighteenth century, physics has expelled goal-directedness, but the psychic order is no more reducible to the physical order than the biotic one. Behaviour is goal-directed and its goal is the object of subjective behaviour.
Since Aristotle, there is a dualism of causal and teleological explanations (‘proximate’ versus ‘ultimate’ causes). By ‘teleology’ is understood both the (biotic) function and (psychic) goal.[22] I restrict goal-directedness to behaviour. Goal-directed behaviour always has a function, but a biotic function is not always goal-directed. Function and purpose presuppose (physical) causality, but cannot be considered causes themselves. Ernest Nagel associates teleological explanations with ‘… the doctrine that goals or ends of activity are dynamic agents in their own realizations … they are assumed to invoke purposes or end-in-views as causal factors in natural processes.’[23] In order to prevent this association, I shall avoid the term teleology (or teleonomy[24]). The goal being the object of animal behaviour cannot be a ‘dynamic agent’. Only the animal itself as a psychic subject pursuing a goal is an agent of behaviour. This is in no way at variance with physical laws.
Often an animal’s behaviour is directed to that of an other animal. In that case, besides a subject-object relation, a subject-subject relation is involved. Animal behaviour is observable, both to people and to animals. By hiding, an animal tries to withdraw from being observed. Threatening and courting have the function to be observed. This occurs selectively, animal behaviour is always directed to a specific goal. Courting only impresses members of the same species.
According to the theory of characters various types of behaviour are to be expected, based on projections of the psychic relation frame onto the preceding ones. It has been established that many animals are able to recognize general relations in a restricted sense. These relations concern small numbers (up to 5), spatial dimensions and patterns in the animals’ environment, motions and changes in their niche, causality with respect to their own behaviour and biotic relations within their own population.
For human beings, activity is not merely goal-directed, but goal-conscious as well. In the following overview, I shall compare animal with human behaviour.
a. A neuron transforms stimuli coming from a sensor into an instruction for an effector, e.g. a muscle or a gland. Muscles enable the animal’s internal and external movements. The glands secrete materials protecting the body’s health or alerting the animal or serving its communication with other animals. The direct stimulus-response relation occurs already in protozoans and sponges. The reflex, being the direct reaction of a single cell, organ, or organ system to a stimulus, is the simplest form of behaviour. It may be considered the unit of behaviour. Reflexes are always direct, goal-directed, and adapted to the immediate needs of the animal. Whereas complex behaviour is a psychically qualified process, a reflex may be considered a psychically qualified event.
Often, a higher animal releases its genetically determined behaviour (fixed action pattern) after a single specific stimulus, a sign stimulus or releaser. If there is a direct relation between stimulus and response, the goal of a fixed action pattern is the response itself, for instance the evasion of immediate danger.
People, too, display many kinds of reflexes. More than animals, they are able to learn certain action patterns, exerting them more or less ‘automatically’. For instance, while cycling or driving a car, people react in a reflexive way to changes in their environment.
Human beings and animals are sensitive for internal and external states like hunger, thirst, cold, or tiredness. Such psychically experienced states are quantitatively determined. An animal can be more or less hungry, thirsty, or tired, feeling more or less stimulated or motivated to acting. The satisfaction of needs is accomplished by complex behaviour. Taken together, animals apply a broad scale of food sources. Animals of a certain species restrict themselves to a specific source of food, characterizing their behaviour. In contrast, human beings produce, prepare, and vary their food. People do not have a genetically determined ecological niche. Far more than animals, they can adapt themselves to circumstances, and change circumstances according to their needs.
Contrary to the animals themselves, scientists analyse the quantitative aspect of behaviour by a balance of costs and benefits.[25] A positive cost-benefit relation is appropriate behaviour and favours natural selection. Behaviour always costs energy and sometimes gains energy. Behaviour involves taking risks. Some kinds of behaviour exclude others. The alternation of characteristic behaviour like hunting, eating, drinking, resting, and secreting depends on a trade-off of the effects of various forms of behaviour.[26] People, too, deliberate in this way, conscious or subconscious.
Animals of the same species may form a homogeneous aggregate like a breeding colony, an ants’ or bees’ nest, a herd of mammals, a shoal of fish, or a swarm of birds. Such an aggregate is a psychically qualified and biotically founded community, if the animals stay together by communicating with each other, or if the group reacts collectively to signals. (A population of animals as a gene pool is biotically qualified, but mating behaviour is a characteristic psychical subject-subject relation.) Human beings form numerous communities qualified by relation frames other than the psychic one.
The study of animals living in groups is called ‘sociobiology’[27]. For quite some time, sociobiology has been controversial as far as its results were extrapolated to human behaviour.[28] Sociobiology was accused of ‘genetic determinism’, i.e. the view that human behaviour is mostly or entirely genetically determined.[29]
b. An ecosystem is a biotically qualified heterogeneous aggregate of organisms (6.5). The environment of a population of animals, its Umwelt, is psychically determined by the presence of other animals, biotically by the presence of plants, fungi, and bacteria, and by physical and chemical conditions as well. Each animal treats its environment in a characteristic way. In a biotope, animals of different species recognize, attract or avoid each other. The predator-prey relation and parasitism are characteristic examples. The posture of an animal is a spatial expression of its state controlled by its emotions, but it has a goal as well, e.g. to court, to threaten, to warn, or to hide. Characteristic spatially founded types of behaviour are orientation, acclimatization, and defending a territory.
The Umwelt and the horizon of experience of a population of animals are restricted by their direct needs of food, safety, and reproduction. Animals do not transcend their Umwelt. Only human beings are aware of the cosmos, the coherence of reality transcending the biotic and psychic world of animals.
c. The movements of animals are often very characteristic: resting, sleeping, breathing, displacing, cleaning, flying, reconnoitring, pursuing, or hunting. On a large scale, the migration of birds, fish, and turtles are typical motions. Usually the goal is easily recognizable. An animal does not move aimlessly. Many animal movements are only explainable by assuming that the animals observe each other. In particular animals recognize each other’s characteristic movements. Human motions are far less stereotype than those of animals, and do not always concern biotic and psychic needs.
Communication is behaviour of an individual (the sender) influencing the behaviour of another individual (the receiver).[30] In communication, structuralists recognize the following six elements: the transmitter, the receiver, the message from transmitter to receiver, the shared code that makes the message understandable, the medium, and the context (the environment) to which the message refers. It consists of a recognizable signal, whether electric or chemical (by pheromones), visual, auditive, or tactile. It is a detail of something that a receiver may observe and it functions as a trigger for the behaviour of the receiver. Communication is most important if it concerns mating and reproduction, but it occurs also in situations of danger. Ants, bees, and other animals are capable of informing each other about the presence of food. Higher animals communicate their feelings by their body posture and body motions (‘body language’).
A signal has an objective function in the communication between animals if the sender’s aim is to influence the behaviour of the receiver. A signal is a striking detail (a specific sound or a red spot, the smell of urine or a posture), meant to draw attention. It should surpass the noise generated by the environment. Many signals are exclusively directed to members of the same species, in mating behaviour or care for the offspring, in territory defence and scaring of rivals. Animal communication is species specific and stereotype. It is restricted to at most several tens of signals. In particular between predators and prey, one finds deceptive communication. As a warning for danger, sound is better suited than visual signals. Smelling plays an important part in territory behaviour. Impressive visual sex characteristics like the antlers of an elk or the tails of a peacock have mostly a signal value.
A signal in animal communication is a concrete striking detail. Only human communication makes use of symbols, having meaning each apart or in combination. Whereas animal signals always directly refer to reality, human symbols also (even mainly) refer to each other. A grammar consists of rules for the inter-human use of language, determining largely the character of a language. ‘Animal communication signals are not true language because animals do not use signals as symbols that can take the place of their referent and because they do not string signals together to form novel sentences.’[31]
d. Often, animal behaviour can be projected on cause-effect relations. Higher animals are sensitive for these relations, whereas human beings have insight in them. Sensory observation, image formation, manipulations, emotions, and conflicts are related forms of behaviour.
The senses allow an animal of forming an image of its environment in order to compare it with images stored in its memory. This enables an animal having the appropriate organs to manipulate its environment, e.g. by burrowing a hole. Characteristic is the building of nests by birds, ants, and bees, and the building of dams by beavers. These activities are genetically determined, hardly adaptable to the environment
The formative activity of animals often results in the production of individual objects like a bird’s nest. Plants are producers as well, e.g. of wood displaying its typical cell structure even after the death of the plant. The atmosphere consisting of nearly 20% oxygen is a product of ages of organic activity. In addition, animals produce manure. From the viewpoint of the producing plant or animal, these are by-products, achieving a relatively independent existence after secretion by some plant or animal. In this respect, wood and manure differ obviously from an individual object like a bird’s nest.
A nest has primarily a physical character and is secondarily spatially founded, but its tertiary biotic and psychic dispositions are more relevant. It is produced with a purpose. Its structure is recognizable as belonging to a certain species. The nest of a blackbird differs characteristically from the nest of a robin. However, the nest itself does not live or behave. It is not a subject in biotic and psychic relations, but an object. It is a subject in physical relations, but these do not determine its character. It is an individual object, characteristic for the animals that produce it, fish, birds, mammals, insects, and spiders. The construction follows from a pattern that is inborn to the animal. Usually, the animal’s behaviour during the construction of its nest is very stereotype. Only higher animals are sometimes capable of adapting it to the circumstances. The tertiary psychic characteristic of a nest, its purpose, dominates its primary physical character and its secondary spatial shape.
Manipulating the environment concerns a subject-object relation. The mutual competition, in particular the trial of strength between rivals, may be considered a physically founded subject-subject relation. Both are species-specific and stereotype. Stereotype animal behaviour contrasts with the freedom of human activity, for which human beings are consequently responsible.
e. Much animal behaviour has a biotic function, like reproduction and survival of the species. Animals are sensitive for genetic relations. Whether protozoans experience each other is difficult to establish, but their mating behaviour makes it likely. The courting and mating behaviour of higher animals is sometimes strongly ritualized and stereotype. It is both observable and meant to be observed. It has an important function in the natural selection based on sexual preferences.[32] The body plan, in particular the sexual dimorphy, is tuned to this behaviour.
Mating behaviour and care for the offspring are psychically qualified and biotically founded types of behaviour. Animals are sensitive to the members of their species, distinguishing between the sexes, rivals, and offspring. For biotically founded behaviour, the mutual communication between animals is important. Sexually mature animals excrete recognizable scents. In herds, families, or colonies, a rank order with corresponding behaviour is observable. An animal’s rank determines its chance of reproduction.
Human mating behaviour is cultivated, increasing its importance. People distinguish themselves from animals by their sense of shame, one reason to cover themselves with clothing. The primary and secondary sex characteristics are both hidden and shown, in a playful ritual that is culturally determined, having many variations (11.1). Human sexuality is not exclusively directed to biotic and psychic needs and inborn sexual differences. It is expressed in many kinds of human behaviour.
The ability to learn is genetically determined and differs characteristically from species to species. Every animal is the smartest for the ecological niche in which it lives. Its ability to learn changes during its development. In birds and mammals, learning takes place already during the prenatal phase. In the juvenile phase, animals display curiosity, a tendency to reconnoitre the environment and their own capacities, e.g. by playing (acting as if). Usually, a young animal has more learning capability than an adult specimen.
The capacity of learning is hereditary and species specific, but what an animal learns is not heritable. The content of the animal’s learning belongs to its individual experience. Sometimes, an animal is able to transfer its experiences to members of its population.
The genetic identity of a plant or animal is primarily determined by the individual configuration of its genes. The identity is objectively laid down in the configuration of the DNA molecule, equal in all cells of the organism. Only sexual reproduction changes the genetic configuration, but then a new individual comes into existence. In contrast, the identity of an animal is not exclusively laid down in its genetic identity. An animal changes because of its individual experience, because of what it learns. By changing its experience (by memorizing as well as forgetting), the animal itself changes, developing its identity. Even if two animals have the same genetic identity (think of clones or monozygotic twins), they will develop divergent psychic identities, having different experiences. In the nervous system, learning increases the number of connections between neurons and between programs.
The individual variation in the behaviour of animals of the same species or of a specified population can often be statistically expressed. The statistical spread is caused by the variation in their individual possibilities (inborn, learned, or determined by circumstances), as far as it is not caused by measurement inaccuracies. When the statistics displays a maximum (for instance, in the case of a Gauss or Poincaré distribution), the behaviour corresponding to the maximum is called ‘normal’. Behaviour that deviates strongly from the maximum value is called ‘abnormal’. This use of the word normal is not related to norms. However, these statistics can be helpful in finding law conformities, in particular if comparison between various species reveals corresponding statistics.
Their learning capacity implies that animals are able to recognize signals or patterns, and to react by adapting their behaviour programs. This means that animals in concrete situations have a sense of regularity. This sense is not comparable to the knowledge of and insight into the universal law conformity that humanity has achieved laboriously. Still, it should not be underestimated. The sense of regularity shared by human beings and animals is a condition for the insight into lawfulness that is exclusively human.
The learning capacity of an animal is restricted to behaviour serving the animal’s biotic and psychic needs. It is an example of the capacity of animals (and plants) to adapt themselves to differing circumstances. In this respect, animals differ from human beings, whose behaviour is not exclusively directed to the satisfaction of biotic and psychic needs.
Besides animal psychology studying general properties of behaviour, ethology is concerned with the characteristic behaviour of various animal species. This does not imply a sharp boundary between animal psychology and ethology. In this chapter, I discussed the general relations constituting the psychic relation frame together with the characters that it qualifies.
Human psychology and psychiatry too are concerned with behaviour, but human behaviour is usually not psychically qualified. Hence, it is not always possible to compare animal with human behaviour. In animals, goal-directed behaviour and transfer of information always concerns psychic and biotic needs like food, reproduction, safety, and survival of the species. In human persons, behaviour may serve other purposes, for instance practicing science.
[1] Purves et al. 1998, 553-554.
[2] McFarland 1999, 62-63
[3] Wallace 1979, 23.
[4] Hogan 1994, 300-301.
[5] McFarland 1999, 174.
[6] Purves et al. 1998, 632-633.
[7] Churchland 1986, 36, 76-77.
[8] Raff 1996, 72.
[9] Hogan 1994, 300-301.
[10] Margulis, Schwartz 1982, 161.
[11] Purves et al. 1998, 810.
[12] Purves et al. 1998, 809-814.
[13] McFarland 1999, 343-346.
[14] McFarland 1999, 204.
[15] McFarland 1999, 340.
[16] McFarland 1999, 278:
[17] About various forms of learning, see Eibl-Eibesfeldt 1970, 251-302; Hinde 1966, chapters 23, 24; Wallace 1979, 151-174; Goodenough et al. 1993, 145; McFarland 1999, part 2.3.
[18] Hinde 1970, chapter 14.
[19] Lehrman 1953.
[20] Cp. Hebb 1953, 108.
[21] Hinde 1966, 426.
[22] Nagel 1977.
[23] Nagel 1961, 402; Ayala 1970, 38.
[24] Mayr 1982, 47-51.
[25] Houston, McNamara 1999.
[26] McFarland 1999, 125-130.
[27] Wilson 1975.
[28] Segerstråle 2000.
[29] Midgley 1985.
[30] Goodenough et al. 1993, chapter 17.
[31] Goodenough et al. 1993, 596.
[32] Darwin 1859, 136-138. |
3742ec4e467f6369 | I know how to calculate them and such stuff, but I wanted to know what they actually signify. I have a vague idea that they have something to do with an electron's position in an atom but what do all of them mean? Any help would be greatly appreciated!
Think about it as the mailing address to your house. It allows one to pinpoint your exact location out of a set of $n$ locations you could possibly be in. We can narrow the scope of this analogy even further. Consider your daily routine. You may begin your day at your home address but if you have an office job, you can be found at a different address during the work week. Therefore we could say that you can be found in either of these locations depending on the time of day. The same goes for electrons. Electrons reside in atomic orbitals (which are very well defined 'locations'). When an atom is in the ground state, these electrons will reside in the lowest energy orbitals possible (e.g. 1$s^2$ 2$s^2$ and 2$p^2$ for carbon). We can write out the physical 'address' of these electrons in a ground-state configuration using quantum numbers as well as the location(s) of these electrons when in some non-ground (i.e. excited) state.
You could describe your home location any number of ways (GPS coordinates, qualitatively describing your surroundings, etc.) but we've adapted to a particular formalism in how we describe it (at least in the case of mailing addresses). The quantum numbers have been laid out in the same way. We could communicate with each other that an electron is "located in the lowest energy, spherical atomic orbital" but it is much easier to say a spin-up electron in the 1$s$ orbital instead. The four quantum numbers allows us to communicate this information numerically without any need for a wordy description.
Of course carbon is not always going to be in the ground state. Given a wavelength of light for example, one can excite carbon in any number of ways. Where will the electron(s) go? Regardless of what wavelength of light we use, we know that we can describe the final location(s) using the four quantum numbers. You can do this by writing out all the possible permutations of the four quantum numbers. Of course, with a little more effort, you could predict the exact location where the electron goes but in my example above, you know for a fact you could describe it using the quantum number formalism.
The quantum numbers also come with a set of restrictions which inherently gives you useful information about where electrons will NOT be. For instance, you could never have the following possible quantum numbers for an atom:
$n$=1; $l$=0; $m_l$=0; $m_s$=1/2
$n$=1; $l$=0; $m_l$=0; $m_s$=-1/2
This set of quantum numbers indicates that three electrons reside in the 1$s$ orbital which is impossible!
As Jan stated in his post, these quantum numbers are derived from the solutions to the Schrodinger equation for the hydrogen atom (or a 1-e$^-$ system). There are any number of solutions to this equation that relate to the possible energy levels of they hydrogen atom. Remember, energy is QUANTIZED (as postulated by Max Planck). That means that an energy level may exist (arbitrarily) at 0 and 1 but NEVER in between. There is a discrete 'jump' in energy levels and not some gradient between them. From these solutions a formalism was constructed to communicate the solutions in a very easy, numerical way just as mailing addresses are purposefully formatted in such a way that is easy that anyone can understand with minimal effort.
In summary, the quantum numbers not only tell you where electrons will be (ground state) and can be (excited state), but also will tell you where electrons cannot be in an atom (due to the restrictions for each quantum number).
Principle quantum number ($n$) - indicates the orbital size. Electrons in atoms reside in atomic orbitals. These are referred to as $s,p,d,f...$ type orbitals. A $1s$ orbital is smaller than a $2s$ orbital. A $2p$ orbital is smaller than a $3p$ orbital. This is because orbitals with a larger $n$ value are getting larger due to the fact that they are further away from the nucleus. The principle quantum number is an integer value where $n$ = 1,2,3... .
Angular quantum number ($l$) - indicates the shape of the orbital. Each type of orbital ($s,p,d,f..$) has a characteristic shape associated with it. $s$-type orbitals are spherical while $p$-type orbitals have 'dumbbell' orientations. The orbitals described by $l$=0,1,2,3... are $s,p,d,f...$ orbitals, respectively. The angular quantum number ranges from 0 to $n$-1. Therefore, if $n$ = 3, then the possible values of $l$ are 0, 1, 2.
Magnetic quantum number ($m_l$) - indicates the orientation of a particular orbital in space. Consider the $p$ orbitals. This is a set of orbitals consisting of three $p$-orbitals that have a unique orientation in space. In Cartesian space, each orbital would like along an axis (x, y, or z) and would be centered around the origin at 0,0. While each orbital is indeed a $p$-orbital, we can describe each orbital uniquely by assigning this third quantum number to indicate its position in space. Therefore, for a set of $p$-orbitals, there would be three $m_l$, each uniquely describing one of these orbitals. The magnetic quantum number can have values of $-l$ to $l$. Therefore, in our example above (where $l$ = 0,1,2) then $m_l$ would be -2, -1, 0, 1, 2.
Spin quantum number ($m_s$) - indicates the 'spin' of the electron residing in some atomic orbital. Thus far we have introduced three quantum numbers that localize a position to an orbital of a particular size, shape and orientation. We now introduce the fourth quantum number that describes the type of electron that can be in that orbital. Recall that two electrons can reside inside one atomic orbital. We can define each one uniquely by indicating the electron's spin. According to the Pauli-exclusion principle, no two electrons can have the exact same four quantum numbers. This means that two electrons in one atomic orbital cannot have the same 'spin'. We generally denote 'spin-up' as $m_s$ =1/2 and spin-down as $m_s$=-1/2.
• 1
$\begingroup$ This is quite helpful, but do you think a little more on the significance of these numbers might be even more helpful? Such as, the energy levels for the principle quantum number or the bonding implications of the angular quantum number? (I don't know enough about the implications of the last two to generalize that much). Also, I feel somewhat like the OP. I can calculate these numbers and I understand that they give us a way to annotate 3D info for an electron, but what does that enable us to do as a result? Why are quantum numbers important for chemistry? $\endgroup$ – Cohen_the_Librarian May 19 '15 at 16:19
• $\begingroup$ @Cohen_the_Librarian I've extensively edited my post to try and address your questions/suggestions. $\endgroup$ – LordStryker May 19 '15 at 17:58
• $\begingroup$ Consider l = 1 (i.e. p orbitals), do the px, py and pz orbitals correspond to ml = -1, 0 and 1 respectively? Is there any correspondence that can be done for the d and f orbitals as well? I understand that it is a matter of perspective but is there a particular convention to assign each value of ml to a particular orbital, be it px or dx-y or what not. $\endgroup$ – Tan Yong Boon Feb 17 '18 at 3:28
The Schrödinger equation for most system has many solutions $\hat{H}\Psi_i=E_i\Psi_i$, where $i=1,2,3,..$. In the case of the hydrogen atom the solutions has a specific notation, which are where the quantum numbers come from.
In the case of the H atom the principal quantum number $n$ refers to solutions with different energy.
For $n>1$ there are several solutions with the same energy, which come in different shapes ($s$, $p$, etc with different angular quantum numbers $l$) that can point in different directions ($p_x$, $p_y$, etc with different magnetic quantum numbers $m$)
These quantum numbers are also applied to multi-electron atoms within the AO approximation.
So the quantum numbers are a way to count (label) the solutions to the Schrödinger equation.
Your Answer
|
893d70ce5ec395f9 | My watch list
Atomic orbital model
The Atomic Orbital Model is the currently accepted model of the electrons in an atom. It is also sometimes called the Wave Mechanics Model. In the atomic orbital model, the atom consists of a nucleus surrounded by orbiting electrons. These electrons exist in atomic orbitals, which are a set of quantum states of the negatively charged electrons trapped in the electrical field generated by the positively charged nucleus. Classically, the orbits can be likened to the planets orbiting the sun. However, the atomic orbital model can only be described by quantum mechanics, in which case the electrons are more accurately described as standing waves surrounding the nucleus.
Additional recommended knowledge
Timeline of development
The developments of the atomic orbital model proceeded along the following timeline:
5th century BC: The ancient Greek philosophers Leucippus and his pupil Democritus proposed that all matter was composed of small indivisible particles called atoms.
1800-1810: Dalton examined the empirical compositions of chemical compounds and proposed a set of rules regarding the properties of the elements and how they combined to form compounds, leading to the Billard Ball theory.
1897: J. J. Thompson published his work on the discovery of the electron. This led to the plum pudding model of the atom.
1905: Albert Einstein demonstrated the photo-electric effect which showed that the frequency of light was proportional to the energy. This discovery would later be used to relate emission and absorption spectra to the electron structure of atoms, leading to the equation E=MC2
1909: Ernest Rutherford led scattering experiments on gold foils (gold-foil experiment), and thereby discovered the nucleus
1911: Rutherford proposed the Rutherford model based on the earlier experimental work in which a cloud of electrons orbit the nucleus.
1913: Niels Bohr proposed the Planetary Model of the atom.
1925: Erwin Schrodinger proposed the Schrödinger equation, which allowed the electrons in an atom to be analyzed quantum mechanically (Quantum Mechanics). This led to the current atomic orbital model of the atom, the Quantum Mechanic model or the Electron Cloud model
Ancient Greeks
Main article: Atomic Theory
The theory of the atom proposed by the ancient Greeks can be summed up in a single thought experiment.
Suppose we take a solid object, and divide that object into two. Now we repeat the process over and over again, continually dividing the remaining piece into two. Will we be able to continue dividing the object indefinitely, or will we come to a point where we find a smallest indivisible particle?
This led to a school of thought that believed that there was a smallest indivisible unit, and this unit was called the atom after the Greek word for unity. Adherents to this philosophy were called atomists.
At this stage, the theory of atoms was more of a philosophical theory than a scientific theory.
Dalton's model
The chemist John Dalton examined the empirical (derived from or guided by experience or experiment) proportions of elements that made up chemical compounds.
At this stage, the atom was still seen as an indivisible object, with no internal structures.
This model is consistent with the concept of an ideal gas as being made up of molecules that exert negligible forces on one another and whose volume is negligible relative to the volume occupied by the gas. This model assumes that each misture component behaves as an ideal gas as if it were alone at a given specific temperature and volume of the mixture
Plum pudding model
Main article: Plum pudding model
The discovery of the electron by J. J. Thomson showed that atoms did indeed have some kind of internal structure. The plum pudding model of the atom described the atom as a "pudding" of positive charge, with negatively charged electrons embedded in this pudding like plums in a plum pudding.
Rutherford model
In 1909, Ernest Rutherford (with Hans Geiger and Ernest Marsden) performed an experiment, which consisted of firing alpha particles into a thin gold foil and measuring the scattering angles of those particles. The results showed that majority of the atom consisted of empty space. In 1911, Rutherford proposed a model to explain the experimental results in which the atom was made up of a nucleus of approximately 10-15 m in diameter, surrounded by an electron cloud of approximately 10-10 m in diameter.
Following this discovery, the study of the atom split into two distinct fields, nuclear physics, which studies the properties and structure of the nucleus of atoms, and atomic physics, which examines the properties of the electrons surrounding the nucleus.
The electrons in the Rutherford model were thought to orbit the nucleus much like the planets orbit the sun. However, this model suffered from a number of problems.
The first is that, unlike the planets orbiting the sun, the electrons are charged particles, and the motion of a charged particle is expected to produce electromagnetic radiation. As an orbiting electron produces electro-magnetic radiation, it loses energy, and would thus spiral into the nucleus.
The second problem was that there was no mechanism to stratify the radius of the orbits. Thus, even if the first problem could be overcome, the second problem meant that there should be a continuous range of electron orbitals available with a continuous range of energies. This in turn would predict that the emission / absorption spectra of atoms would be continuous distributions rather than the highly-peaked line spectra that were observed. (The discovery of the photo-electric effect by Einstein had shown that the frequency of electromagnetic energy was proportional to the energy, and thus that each line in the line spectra corresponded to a very well-defined difference in energy between separate atomic orbitals around the same nucleus).
Bohr model
Main article: Bohr model
After the discovery of the photoelectric effect, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra was that these spectra contained discrete lines. The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. This was achieved by giving the electrons some kind of wave-like properties. In particular, electrons were assumed to have a wavelength (a property that had previously been discovered, but not entirely understood). The Bohr model was therefore not only a significant step towards the understanding of electrons in atoms, but also a significant step towards the development of the wave/particle duality of quantum mechanics.
The premise of the Bohr model was that electrons had a wavelength, which was a function of its momentum, and therefore an orbiting electron would need to orbit at a multiple of the wavelength. The Bohr model was thus a classical model with an additional constraint provided by the 'wavelength' argument. In our current understanding of physics, this 'wavelength' argument is known to be an element of quantum mechanics, and for that reason the Bohr model is called a semi-classical model.
The Bohr model was able to explain the emission and absorption spectra of Hydrogen. The energies of electrons in the n=1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as why Helium (2 electrons), Neon (8 electrons), and Argon (18 electrons) exhibits similar chemical behaviour. Modern physics explains this by noting that the n=1 state can hold 2 electrons, the n=2 state can hold 6 electrons, and the n=3 state can hold 10 electrons. In the end, this was solved by the discovery of modern quantum mechanics and the Pauli Exclusion Principle.
Current theory
With the development of quantum mechanics, it was found that the electrons orbiting a nucleus could not be fully described as particles, but needed to be explained by the wave-particle duality. In this sense, the electrons have the following properties:
Wave-like properties
The electrons are never in a single point location, although the probability of interacting with the electron at a single point can be found from the wavefunction of the electron
Particle-like properties
There is always an integer number of electrons orbiting the nucleus.
Describing the electrons
Because the electrons around a nucleus exist as a wave-particle duality, they cannot be described by a location and momentum. Instead, they are described by a set of quantum numbers that encompasses both the particle-like nature and the wave-like nature of the electrons. Each set of quantum numbers corresponds to a wavefunction. The quantum numbers are:
The principal quantum number, n, is analogous to the harmonic of the electrons. That is, the n=1 states are analogous to the fundamental frequency of a wave on a string, and the n=2 states are analogous to the first harmonic, etc.
The azimuthal quantum number, l, describes the orbital angular momentum of each electron. Note that this has no classical analog. The number l is an integer between 0 and (n - 1).
The magnetic quantum number, ml, describes the magnetic moment of an electron in an arbitrary direction. The number ml is an integer between -l and l.
The spin quantum number, s, describes the spin of each electron (spin up or spin down). The number s can be +12 or -12.
These quantum numbers can only be determined by a full quantum mechanical analysis of the atom. There is no way to describe them using classical physical principles. A more technical analysis of these quantum numbers and how they are derived is given in the atomic orbital article.
Furthermore, the Pauli Exclusion Principle states that no two electrons can occupy the same quantum state. That is, every electron that is orbiting the same nucleus must have a unique combination of quantum numbers.
Predictions of the model
Consider two states of the Hydrogen atom:
State 1) n=1, l=0, ml=0 and s=+12
State 2) n=2, l=0, ml=0 and s=+12
Further reading
The details of how atomic orbitals are characterized, and approximations used to calculate them are described in the article atomic orbital.
Details on the structure of electrons in compounds can be found in molecular orbital.
The study of how chemical bonds are formed by orbital electrons to form molecules is discussed in quantum chemistry.
The study of electrons in crystalline materials is a topic of solid state physics and condensed matter physics.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Atomic_orbital_model". A list of authors is available in Wikipedia. |
6d34d114d9f89f4a | "On the History of Unified Field Theories. Part II. (ca. 1930 – ca. 1965)"
Hubert F. M. Goenner
1 Introduction
2 Mathematical Preliminaries
2.1 Metrical structure
2.2 Symmetries
2.3 Affine geometry
2.4 Differential forms
2.5 Classification of geometries
2.6 Number fields
3 Interlude: Meanderings – UFT in the late 1930s and the 1940s
3.1 Projective and conformal relativity theory
3.2 Continued studies of Kaluza–Klein theory in Princeton, and elsewhere
3.3 Non-local fields
4 Unified Field Theory and Quantum Mechanics
4.1 The impact of Schrödinger’s and Dirac’s equations
4.2 Other approaches
4.3 Wave geometry
5 Born–Infeld Theory
6 Affine Geometry: Schrödinger as an Ardent Player
6.1 A unitary theory of physical fields
6.2 Semi-symmetric connection
7 Mixed Geometry: Einstein’s New Attempt
7.1 Formal and physical motivation
7.2 Einstein 1945
7.3 Einstein–Straus 1946 and the weak field equations
8 Schrödinger II: Arbitrary Affine Connection
8.1 Schrödinger’s debacle
8.2 Recovery
8.3 First exact solutions
9 Einstein II: From 1948 on
9.1 A period of undecidedness (1949/50)
9.2 Einstein 1950
9.3 Einstein 1953
9.4 Einstein 1954/55
9.5 Reactions to Einstein–Kaufman
9.6 More exact solutions
9.7 Interpretative problems
9.8 The role of additional symmetries
10 Einstein–Schrödinger Theory in Paris
10.1 Marie-Antoinette Tonnelat and Einstein’s Unified Field Theory
10.2 Tonnelat’s research on UFT in 1946 – 1952
10.3 Some further developments
10.4 Further work on unified field theory around M.-A. Tonnelat
10.5 Research by and around André Lichnerowicz
11 Higher-Dimensional Theories Generalizing Kaluza’s
11.1 5-dimensional theories: Jordan–Thiry theory
11.2 6- and 8-dimensional theories
12 Further Contributions from the United States
12.1 Eisenhart in Princeton
12.2 Hlavatý at Indiana University
12.3 Other contributions
13 Research in other English Speaking Countries
13.1 England and elsewhere
13.2 Australia
13.3 India
14 Additional Contributions from Japan
15 Research in Italy
15.1 Introduction
15.2 Approximative study of field equations
15.3 Equations of motion for point particles
16 The Move Away from Einstein–Schrödinger Theory and UFT
16.1 Theories of gravitation and electricity in Minkowski space
16.2 Linear theory and quantization
16.3 Linear theory and spin-1/2-particles
16.4 Quantization of Einstein–Schrödinger theory?
17 Alternative Geometries
17.1 Lyra geometry
17.2 Finsler geometry and unified field theory
18 Mutual Influence and Interaction of Research Groups
18.1 Sociology of science
18.2 After 1945: an international research effort
19 On the Conceptual and Methodic Structure of Unified Field Theory
19.1 General issues
19.2 Observations on psychological and philosophical positions
20 Concluding Comment
19 On the Conceptual and Methodic Structure of Unified Field Theory
From his varied attempts at a unified field theory during the decades covered both in Part I and Part II of this review, it may be concluded that in this field of research Einstein never had or followed a program, in the strict sense, over a longer period of time. He seems to have been resistant to external influences as were, e.g., the fashionable attempts at an inclusion of the meson into all sorts of unitary field theory. Whether the last decade of his research pursuing structural investigations within mixed geometry and directed toward the establishment of field equations, as firmly grounded as his field equations of general relativity, is characterized by “enlightened perseverance” or “biased stubbornness” lies in the eye of the beholder. We shall now try to look from a more general point of view at the detailed and technical discussions given above.
19.1 General issues
Advantages of a theory unifying other theories are: (1) the conceptual structure of the unified theory will in general be richer, (2) its empirical content more inclusive, and (3) the limits of application of the sub-theories covered easier to determine ([221], p. 273, 276). On the first point UFT performs too well: The various forms of Einstein–Schrödinger unified field theory all provide us with too many mathematical objects as to allow a convincing selection of an unambiguous geometrical framework for a physical theory. To quote M.-A. Tonnelat:
“The multiplicity of structural elements brought into the game, the arbitrariness reigning over their interpretation, bring an unease into the theory which one cannot lightly make vanish in total.”([641*], p. 299.)318View original Quote
In addition, particularly within mixed (or even metric-affine) geometry, the dynamics is highly arbitrary, i.e., possible field equations abound. Usually, the Lagrangian is built after the Lagrangian of general relativity, possibly because this theory was required to emerge from UFT in some limiting process. But L. A. Santalò has shown that the “weak” field equations can be reached from a Lagrange function linear in curvature and quadratic in torsion containing 5 free parameters ([524], Theorem 2, p. 350).319 There exists a 3-parameter Lagrangian which is transposition invariant and also leads to the “weak” system (p. 352). The same author has also proven that there are Lagrangians of the same class which are not invariant under λ-transformations (cf. Section 2.2.3) but still lead to the “weak” field equations (p. 351, Theorem 3). Thus, in spite of all symmetry- and plausibility arguments put forward, none of the field equations used by Einstein and Schrödinger acquired an equivalent position of uniqueness like the field equations of general relativity.
As to the 2nd and 3rd points, even if UFT had succeeded as a theory with a well-put particle concept, by the 1940s the newly discovered particles (neutron, mesons, neutrino) would have required another approach taking into account the quantum nature of these particles. Field quantization had been successfully developed for this purpose. Occasionally, the argument has been made that a unification of the two long-range fundamental forces within classical theory would have been enough to be asked for in “pre-quantum physics” ([234], p. 255). However, the end of pre-quantum physics must be set not later than 1925/26; in particular, the development of quantum electrodynamics had started already at the end of the 1920s; it is now part of the partial unification achieved by the Glashow–Salam–Weinberg model (1967). Certainly, quantum field theory suffered from severe problem with infinities to be removed before the observables of the theory could provide numbers to be compared with measurements. At last, renormalization procedures did the job so well that an effect like the Lamb-shift could be calculated, with the inclusion of self-energy contributions, up to highest precision.320
Unlike this, the UFTs of the 1920s to the 1940s did not get to the stage where empirical tests could have been made. Actually, in later developments novel gravito-electromagnetic effects were derived from UFT; cf. Sections 6.1.2, 15.2. Unfortunately, they never led to observed results. Often, it is argued unconvincingly that this is due to the weakness of the gravitational field; for either a strong electrical or a strong gravitational field (neutron stars), measurable effects of the interaction of these fields could have been expected. In a way, UFT of the Einstein–Schrödinger type was as removed from an empirical basis then as quantum cosmology or string theory are at present. Pauli had been aware of this already in the 30s ([489*], p. 789):
“It is odd how Einstein carries on physics nowadays. In effect, it is the method of a pure mathematician decreeing all from his desk who completely has lost contact with what physicists really do.”321View original Quote
Ironically, when H. Weyl had suggested his generalization of Riemannian geometry by a purely mathematical argument, and then had used it to build a theory unifying gravitation and electromagnetism (cf. Section 4.1 of Part I), Einstein had refuted him for not having thought of the empirical consequences. Now, he followed the same course: He started from a mathematical structure and then aimed at turning it into a physical theory. There is a difference, though, because for his own theory Einstein was not able to derive testable consequences:
“The unified field theory now is self-contained. But its mathematical application is so difficult that I have not been capable to test it in some way in spite of all the efforts invested.” (Letter to M. Solovine 12 February 1951, in [160], p. 106.)322View original Quote
The speculative character of UFT was rendered yet more unattractive by its unsolved problems: how to describe matter, in particular the motion of charged particles. Doubts came up very early whether the Lorentz force could be extracted from the theory in the lowest steps of a non-trivial approximation. Instead of winning new results, many authors were content when they were able to reproduce effects already known from general relativity and Maxwell’s theory; cf. Section 15.3. The missing empirical support was critically seen even within the community of workers in unified field theory:
“Unified theories do suggest to base the electromagnetic and gravitational fields on one and the same hyper-field – with the physical phenomena being explained by a geometrical structure imposed on space-time, independently from any phenomenological hypothesis. The ambition of such an explication in the spirit of Cartesian philosophy is recognized which, far from following the observational and experimental results step by step, pretends to anticipate them. The theory incorporates its actual provisions into a vast synthesis and furnishes them with a whole program of a posteriori verifications.” ([94], p. 331) 323View original Quote
Due to his epistemological and methodical position, Einstein could not have cared less. With no empirical data around, he fitted the envisaged UFT to the various mathematical structural possibilities. As has been shown in great detail, originally, when struggling with a relativistic theory of gravitation, he had applied two methods: “induction from the empirical data” and “mathematical deduction” ([309], p. 500–501). In his later work, he confined himself to the second one by claiming that only mathematical simplicity and naturalness could lead to a fundamental theory reflecting unity. Intuition is played down by him in favour of quasi-axiomatic principles. This shift in Einstein’s epistemology and methodology has been described in detail by J. D. Norton [458], D. Howard [290], and J. van Dongen [667]. However, we should not forget that both concepts, simplicity and naturalness, lack unambiguous mathematical or philosophical definitions.
In this context, Einstein’s distinction between “constructive theories” and “theories of principle” may also be considered. The first ones are constructive, they “[…] attempt to build up a picture of the more complex phenomena out of the materials of a relatively simple formal scheme from which they start out […]”. The second important class called principle-theories “employ the analytic, not the synthetic, method. The elements which form their basis and starting-point are not hypothetically constructed but empirically discovered ones, general characteristics of natural processes, principles that give rise to mathematically formulated criteria which the separate processes or the theoretical representations of them have to satisfy.” ([157], p. 228) According to Einstein, the theory of relativity belongs to the second class with its “logical perfection and security of the foundations”. His unified field theory fits better to the description of a constructive theory.
The more delicate question why the unification of the fundamental forces must be sought by a geometrization of the fields, was rarely asked. In Weyl’s approach, a pre-established harmony between mathematics and physics had been put forward as an argument. Y. Mimura and T. Hosokawa saw the “mission of physics” in looking for answers to the questions: “What is space-time in the world wherein physical phenomena occur?” and “By what laws are those physical phenomena regulated?” Their idea was that the properties of space-time are represented by physical laws themselves. “Thus theoretical physics becomes geometry. And that is why physical laws must be geometrized” ([426], p. 102). This circular remark of 1939 is less than convincing. Other paths could have been followed (and later were), e.g., one along a unifying (symmetry) group.324 A renowned scientist like P. A. M. Dirac shied away altogether from such a big sweep as unification is. He favoured an approximative approach to an eventually all encompassing theory: “One should not try to accomplish too much in one stage. One should separate the difficulties in physics one from another as far as possible, and then dispose of them one by one” ([125], quoted from [338], p. 373). On aesthetical grounds, Dirac came closer to Einstein:
When Einstein geometrized gravitation, he had a good argument in the equality of inertial and gravitational mass. For electrodynamics and UFT, no such argument has been presented. In the 1950s, “charge-independence” as a property of strong interactions between baryons and mesons was discussed but not used for geometrization.325
19.1.1 What kind of unification?
According to Bargmann, the aim of UFT was: “(1) to deduce, at least in principle, all physical interactions from one law, (2) to modify the field equations in such a way that they would admit solutions corresponding to stable charged particles” ([12], p. 169). This general description of Einstein’s eventual course for describing fundamental aspects of physical reality (nature) by one single theory can be complemented by further, more specific, details.
In Sections 8.1 and 10.3.4 we have mentioned Pauli’s criticism with regard to the use of g = h + k ij (ij) [ij] in Einstein’s and some of Schrödinger’s unified field theories. As Mme. Tonnelat noted, this meant that the theory is unified only in “a weak sense” because the gravitational and electromagnetic fields are represented by different geometrical objects. Apart from the demand that the fundamental field quantities (metric, …) must be irreducible with regard to the diffeomorphism group, Einstein had claimed symmetry with regard to λ-transformations, because these would mix the symmetric and skew-symmetric parts of the connection and thus counter criticism of the type Pauli had phrased. A further necessary condition for a unified field theory has been formulated: the Langrangian must not decompose into irreducible parts, i.e., it must not be expressed as a sum of several scalar densities but consist of a single “unified” term (cf. [308], p. 786). In principle, this was accepted also by Einstein as reported in Section 7.1. Sciama’s unified field theory [567] forms an example326. A. Lichnerowicz called a theory “unitary in the strict sense”
“[…] if the exact field equations control an indecomposable hyper-field, and themselves cannot be fractionized into propagation equations of the gravitational and of the electromagnetic field but approximatively […].” ([371], p. 152)327View original Quote
In Kaluza–Klein theory the gravitational and electromagnetic fields are encased in one and the same geometrical object, the 5-dimensional metric. Just one term in the Lagrangian is needed. By M.-A. Tonnelat, such a theory has been called “unified in a strong sense” [627, 641], p. XVII.328 In his thesis on a generalization of Kaluza–Klein theory, Y. Thiry had defined a unitary theory for two fields by the requirements (1) that the two fields emanate from the same geometry, and (2) that they amalgamate into one hyper-field of which they represent nothing more than two different aspects ([606], p. 13). An example for a hyper-field taken from history would be the electromagnetic field tensor within special relativity.
Bargmann’s second demand placed on UFT, the existence of stable solutions describing charged particles, remained unfulfilled within the Einstein–Schrödinger theories. With singularities of the fields being excluded, not even a satisfactory definition of a particle (beyond the concept of test particle) could be given in such classical field theories. Einstein was very aware of this when he wrote to Besso ([163*], p. 438): “E.g., a ‘particle’ in the strict sense of the word does not exist, because it does not fit to the program of representing reality by everywhere continuous, even analytical fields.”329View original Quote Even today, a convincing definition of an interacting particle apparently does not exist. We have free fields describing particles and interaction terms introduced in the Lagrangian. Attempts at creating the concept of a single particle including its interaction with other particles have been attempted, unsuccessfully, e.g., one by G. Pétiau [494]. A precise definition of an interacting particle as a member of an ensemble of particles seems not outdated, but out of reach.
Methodological questions could be added. Why did Einstein, Schrödinger and most others working in unified field theory start with a metric and a connection as independent variables and then link them through a condition for the covariant derivative of the metric? To solve the latter condition for the connection has used up an immense amount of energy and time (plus printed paper) as we have seen before. Were they afraid of going one step back behind H. Weyl and other mathematicians who had recognized the independence of the concepts of metric and parallel transport? It might have been more direct to generalize, in a systematical investigation, the Levi-Civita connection (Christoffel symbol) by building the connection as a functional of the symmetric and skew-symmetric parts of the metric and their first derivatives as Hattori and Eisenhart have done.
19.1.2 UFT and quantum theory
Einstein’s position with regard to quantum mechanics, particularly his resistance to the statistical interpretation of it is well known, cf. [585, 586]. In this context, his abortive attempts at presenting contradictions within the Copenhagen interpretation of quantum mechanics during the 1927 Solvay conference in Brussels may be remembered. Ehrenfest’s remark there is quoted by Heisenberg: “I’ m ashamed of you, Einstein. You put yourself here just in the same position as your opponents in their futile attempts to refute your relativity theory” ([248], p. 107). A decade later, Einstein’s comment on Bohr’s riposte to the EPR-paper as reported by Léon Rosenfeld was that Bohr’s position was logically possible, but: “so very contrary to my scientific instinct that I cannot forego the search for a more complete conception.” ([518], p. 131).330 In his judgment on quantum mechanics, Einstein differed strongly from Pauli’s who believed in the completeness of quantum mechanics:
“This generalization of the concept ‘state’ which involves a strict renouncement of a lawful description of the single, individual system for me seems to be necessary and, by the way also understandable, in view of the facts mentioned earlier. It is a consequence of the influence, unknown in principle, on the system being observed by the chosen measuring device. Due to this state of affairs, only as a consequence of this renouncement of a lawful description of an individual system seemingly is it possible to continue using the conception of ‘closed system’ and the closely related notion of space and time. In this sense, I deem the description of quantum mechanics to be complete.” ([489], p. 520–521.)331View original Quote
Therefore, Pauli questioned whether it was possible to unify the gravitational and electromagnetic fields in a classical field theory without taking note of “those facts in which the quantum of action plays an essential role”. In fact, already at the time suggestions for a “unitary” field theory in the framework of quantum (field) theory were made: E. Stueckelberg saw electron, proton, neutron and the then only neutrino as states of one more fundamental elementary particle; the unitary field is a spinor with 16 components [594]. This was a step further than the neutrino theory of light (cf. Section 1). Heisenberg’s later program of non-linear spinor theory as kind of a unified quantum field theory [247], although unsuccessful, belongs into this category. We are not surprised about Dirac’s opposition: In a letter to Heisenberg of 6 March, 1967 he wrote:
“My main objection to your work is that I do not think your basic (non-linear field) equation has sufficient mathematical beauty to be a fundamental equation of physics. The correct equation, when it is discovered, will probably involve some new kind of mathematics and will excite great interest among the pure mathematicians, just like Einstein’s theory of the gravitational field did (and still does). The existing mathematical formalism just seems to me inadequate.” (Quoted from [339*], p. 281.)
Perhaps, Dirac’s foible for linear equations was behind this judgment. Much earlier, in 1942, after a colloquium in Dublin in which Eddington and Dirac had taken part, Schrödinger complained to Max Born:
“Your idea of getting their opinion on Born’s theory is pathetic.332 That is a thing beyond their linear thoughts. All is linear, linear, – linear in the n’th power I would say if there was not a contradiction. Some great prophet may come …‘If everything were linear, nothing would influence nothing,’ said Einstein once to me.” ([446*], p. 272.)
In fact, quantum mechanics and quantum field theory live very much on linearity (Hilbert space, linear operators). In principle, this does not forbid quantization of non-linear classical field theories like the non-linear electrodynamics of Born and Infeld (cf. Section 5). Already in 1933, in connection with this non-linear electrodynamics, Max Born confronted Einstein’s opinion:
“For a long time Einstein had advocated the point of view that there must be a non-linear field theory containing the quantum laws. We have here a non-linear field theory, but I do not believe that the quantization can be dispensed with. […] I believe the following: every theory built up on classical foundations requires, for the completion of its assertions, an extension by initial and boundary conditions, satisfying only statistical laws. The quantum theory of the field provides this statistical completion […] through an inner fusion of the statistical and causal laws.” ([37], p. 434, 2nd footnote.)
Hence, Born’s attempts at a theory compatible with both gravitation and quantum theory definitely left the framework of UFT [39]. It seems that M.-A. Tonnelat accepted Born’s point of view.
A number of authors we have met before felt entitled to give general or very specific comments on the relationship between UFT and quantum theory. J. Callaway came to the conclusion that: “his [i.e., Einstein’s UFT] theory will either be able to handle quantum phenomena or it will fail completely.” ([70], p. 780.) Moffat & Boal 1975 [443] just guessed: “It could be that the main significance of the λ-gauge transformations lies in the fact that it may influence the renormalizability of the theory. […]. It is possible that the unified field theory described here is renormalizable, because of its invariance under the extended gauge group of transformations.” Within JordanThiry theory as a “strongly unified” theory in the sense of M.-A. Tonnelat, the idea that its non-linearity would lead to elementary particles seemingly was given up. Moreover, in Sections 4.2, 4.3 and 16.2 we encountered two attempts, both also unsuccessful, toward a synthesis of classical field theory and quantum theory in the frameworks of wave geometry and 5-dimensional relativity. At the GR-2-Conference in Royaumont in 1959, Ph. Droz-Vincent spoke about the quantization of the theory’s linear approximation; a mass term was introduced by hand. The theory then was interpreted as “a unitary theory of graviton-photon” ([132], p. 128).
A desperate argument, from the point of view of physics, in favour of UFT was advanced at the same conference by mathematician A. Lichnerowicz: there would be many good experiments in quantum theory, but no good (quantum field) theories. In UFT at least, we would have a theory with a definite mathematical meaning ([120*], p. 149). For some like A. Proca, the hope in Einstein’s genius overcame a sober assessment: “Convinced that every ‘field’ could be subjected to a theory of the type he had developed, he [Einstein] nurtured the ‘modest hope’ that such a theory possibly would bring forth the key to quantum theory. [500]333View original Quote
In fact, Einstein’s claim that unified field theory would supersede quantum mechanics as a foundation for physics, could not be strenghtened by a recipe by which elementary particles were generated from classical field distributions. The concept of geon (“gravitational electromagnetic entity”) was introduced by J. A. Wheeler in 1955 [696] in order to form a classical model for an elementary particle. It turned out to be a tinkering with the global topology of solutions. Although approximate solutions of Einstein’s vacuum equations describing geons have been found (cf. [60]), they have not been proven to be stable entities. According to Wheeler
“A geon has exactly the property of being only an approximate solution; or rather, an accurate solution which is not fully stable with time – it leaks energy. Thus it is not in agreement with one’s preconceived idea that there should be a particle-like solution that is fully stable; but aren’t we being very brash if we say that the world isn’t built that way? […] Perhaps the stability of the particles we know is due to some intrinsic quantum character, which we cannot expect to show up before we have gone to the quantum level.” ([120*], p. 149)
Perhaps, this new concept led M.-A. Tonnelat once again to an optimistic comment ([635], p. 9):
“To the best of hypotheses, the unitary theories seem to explain, by classical methods, the formation of corpuscular structures out of the unified field. This attribution of particles to the field, postulated so energetically by Einstein, obviously is in a much too embryonic state to naturally explain the existence of different types of elementary particles.” 334View original Quote
Two years before, P. Bergmann’s programmatic statement at the Chapel Hill conference, i.e.,
“The original motivation of unified field theory is get a theory of elementary particles, which includes electrons and not only hyper-fragments, and furthermore obviate the need for quantization which would result from the intrinsic non-linearity”
had been instantly put into doubt by R. Feynman:
“Historically, when the unified field theory was first tackled, we had only gravitation, electrodynamics, and a few facts about quantization, […]. In the meantime, the rest of physics has developed, but still no attempt starts out looking for the quantum effects. There is no clue that a unified field theory will give quantum effects.” ([120], p. 149.)
Quantum mechanics, in particular as the measuring process is concerned, seemed not to have reached a generally accepted final interpretation. It looks as if Dirac wished to exploit this situation for making unitary field theory more respectable:
“And I think that it is quite likely that at some future time we may get an improved quantum mechanics in which there will be a return to determinism and which will, therefore, justify Einstein’s point of view.” ([126], p. 10.)
Dirac had in mind that application of the present quantum mechanics should not be pushed too far into domains of highest energy and smallest distances (p. 20). In view of the current brilliant empirical basis of quantum field theory, and the failure of all attempts to built a hidden-parameter theory, Dirac’s remark is far from supporting Einstein’s classical unified theory.
Einstein was well aware of the shortcomings of his “theory of the asymmetric field”. The last paragraph of Appendix II in the 5th Princeton edition of The Meaning of Relativity reads as:
“One can give good reasons why reality cannot at all be represented by a continuous field. From the quantum phenomena it appears to follow with certainty that a finite system of finite energy can be completely described by a finite set of numbers (quantum numbers). This does not seem to be in accordance with a continuum theory, and must lead to an attempt to find a purely algebraic theory for the description of reality. But nobody knows how to obtain the basis of such a theory” [158].
His remark in a letter of 10 August 1954 to M. Besso led into the same direction: “Yet, by all means, I consider it as possible that physics cannot be founded on the field concept, i.e., on continuous structures. In this case, from my whole castle in the air, gravitational theory included, but also from the rest of contemporary physics nothing remains.”335View original Quote
And, a fortnight before his death, he wrote that he did not want to dispense with “a complete real-description of the individual case” but also:
“On the other hand, it is to be admitted that the attempt to comprehend the undoubtedly atomistic and quantum-structure of reality on the basis of a consequential field theory encounters great challenges. By no means am I convinced that they can be overcome”336View original Quote ([159], p. XVI )
19.1.3 A glimpse of today’s status of unification
That unified field theory of the Einstein–Schrödinger type had become obsolete, was clear to theoretical physicists since the mid 1950s. In his introduction to the first conference on gravitation in Japan, in 1962, Nobel prize winner R. Utiyama wrote: “[…] it was no exaggeration to say that the old-fashioned mathematical investigation of Einstein’s theory was not regarded as a field of physics but rather a kind of mathematical play or a kind of metaphysics” ([662], p. 99). Einstein’s former assistant P. G. Bergmann was less harsh: “Einstein spent the last five years of his life investigating this theory (the ‘asymmetric’ theory) without arriving at clear-cut answers. At the present time, all unified field theories must be considered speculative. But for a scientist who believes in the intrinsic unity of the physical universe, this speculative inquiry has an irresistible attraction” ([22], p. 492).
The idea of unifying all fundamental physical interactions in one common representation is as alive today as it was in Einstein’s times. Its concrete realizations differ from UFT in important points: quantum fields are used, not classical ones, and all four fundamental interactions are taken into account – in principle. At first, Grand Unified Theories (GUTs) unifying only the electromagnetic, weak, and strong interactions were considered, e.g., with gauge group SU (5). The breaking of this symmetry to the symmetry of the standard model of elementary particles SU (3) × SU (2) × U(1) required the introduction of Higgs fields belonging to unattractive large representations of SU (5) [561]. The concept of “spontaneous symmetry breaking” implying a dynamics exhibiting the full symmetry and a ground state with less symmetry is foreign to the Einstein–Schrödinger type of unitary theory. The GUTs studied have made predictions on the occurrence of new particles at a mass-scale (GUT-scale) outside the reach of present particle accelerators, and on the existence of topological defects such as cosmic strings, or domain walls. None were detected up to now. There was also a prediction on the decay of the proton by minimal SU (5) which remained unsupported by subsequent measurements. Also, the simplest SU (5)-GUT does not bring together in one point the different energy-dependent couplings. This would be accomplished by a supersymmetric SU (5 )-GUT.
Apparently, a main purpose of string theory has been to consistently unify all gauge interactions with gravity. However, “string phenomenology”, i.e., the search for the standard model of elementary particles in (supersymmetric) string theory, has not been successful in the past 30 years. An optimistic assessment would be: “to obtain a connection between (string) theory and present (standard model) experiments is possible in principle but difficult in practice” ([451], p. 10).
The attempted inclusion of gravitation causes enormous conceptual and calculational difficulties which have not yet been overcome by candidates for “Theories of Everything” (TOEs), as are superstring theory,337 M-theory,338 brane-world scenarios339 etc. In the mid 1990s, the 5 existing superstring theories in time plus a 10-dimensional space340 via dualities have been shown to reduce, effectively, to one remaining theory. The extra dimensions are a means of allowing gravity to propagate into these dimensions while the other fundamental forces may be confined to four-dimensional spacetime. Problems caused by the number of additional spacelike dimensions required in modern unified theories are the unknown physics acting in them and the unacceptably large number of possibilities for space-time: the extra dimensions can be compactified in a giant number of different ways estimated to amount to 10500 (string theory landscape). A way out has been claimed by adherents of the multiverse-speculation: only a small number of the ground states are claimed to be “habitable”. Thus, the fundamental constants of the universe would not be explained by physics but by some form of the anthropic principle. Up to now superstring- or M-theory have not been able to make explicit predictions about large distance physics. A recent presentation of string theory is given in [14].
In contrast to UFT, the modern theory of unified fields in the form of a set of rules and hopes purported by superstring theory has inspired greatly the development of some mathematical disciplines. Conceptually, non-Abelian gauge theory, supersymmetry and their geometrical realizations, as well as the renormalization group [461] now are part of the game. Even speculations about unification of such different objects as are elementary particles, microscopic black holes, and string states have been presented [522] – not to speak about even more speculative objects like black branes and blackfolds [193].
It seems that today’s discussions divide theoretical physicists into two groups: those striving for a “Theory of Everything”341 as the modern equivalent to UFT, and those believing that this is the kind of reductionism already disproved by present physical theory, in particular by many-particle-phenomena [350]. In one of today’s approaches, conceptual unification, i.e., the joinder of Heisenberg’s uncertainty relations to gravitational theory (general relativity) and grand unification, has been set apart from the unification of spin-2-particles with all the other (elementary) particles, cf. [521].342 The second point listed above could be expressed in more generality as: “One of the main goals of unified theory should be to explain the existence and calculate the properties of matter” ([236], p. 288). True, the present designs of TOEs have absorbed an immense amount of knowledge gained, theoretically and empirically, since the old days of classical UFT. Nevertheless, the elementary particle mass-spectrum is as unexplained today as it was then. In spite of the lauded Higgs-mechanism, the physical origin of mass is far from being understood. There seems to be a sizeable number of physicists with reserved attitude toward modern unified field theories.
19.2 Observations on psychological and philosophical positions
19.2.1 A psychological background to UFT?
According to guesswork propagated in the community, one of the reasons why both Albert Einstein and Erwin Schrödinger engaged in their enduring search for a unified field theory was that they hoped to repeat their grand successes: general relativity and its positive empirical test in 1919, and wave mechanics with its quick acceptance, in 1926. Schrödinger wrote to one of his lady lovers in January 1947 that he “had completely abandoned all hope of ever again making a really important contribution to science.” But now, it looked as if he had been sent to Ireland by “the Old Gentleman” to live freely, without direct obligations, and follow his fancies. Which, of course, had brought him to the present final solution of how to set up unified field theory. His biographer W. Moore claimed that Schrödinger “was even thinking of the possibility of receiving a second Nobel prize” ([446] p. 434). In a way, world fame seems to cause addiction.
With Einstein, matters were more complicated. He certainly had no such wishes as Schrödinger; instead, he enjoyed using his fame as a propellant for making known his views, in other fields than in physics, in public.343 But, why would he begin anew with mixed geometry after roughly two decades? Did he want to not leave the territory to Schrödinger who had wandered into it since 1943 and had believed to be able to do better than Einstein? Einstein did not strive for a second Nobel prize, but was tied up by his philosophical thinking about reality and causality. It led him to believe that the epistemological basis of the ideas that had once lead to a splendid result, the geometrization of the gravitational field within the framework of the continuum, by necessity must work again.
“The gravitational equations could be found only due to a purely formal principle (general covariance), i.e., due to faith in the imaginably greatest logical simplicity of the laws of nature. Because it was evident that the theory of gravitation was but a first step toward the discovery of field laws, it appeared to me that this logical way first must be thought to its end; only then one can hope to arrive at a solution to the quantum problem.” Einstein to de Broglie 8 February 1954 ([580], Appendix A.2.8.)344View original Quote
In the same letter, Einstein explained why he did not want to look like an “[…] ostrich permanently hiding his head in the relativistic sand in order to not have to face up to the evil quanta.”345View original Quote He thought this explanation could interest de Broglie “from a psychological point of view, […].”
That successful physicists tend to come back to fruitful ideas in their previous work can be seen also in W. Heisenberg. In his case, the idea was to build theories just containing observables. His introduction of the S-matrix followed this objective he had applied when making the transition to quantum mechanics. Although valuable for scattering probabilities, and followed and extended by a number of well-known theoreticians, it has been criticized as having held up progress in elementary particle physics (Yang–Mills theory) [423].
At the very end of his life, Einstein was disappointed; he blamed “the physicists proper” for not understanding progress made by him. This is shown by his letter to Hans Mühsam of February 22, 1955:
“However, recently I decisively made progress. It refers to an improvement of the theory as far as its structure is concerned, but not with regard to finding solutions which could be examined through the [empirical] facts. […] The matter is very plausible in itself and as perfect as to find increased interest among mathematicians. The physicists proper are rejecting it because they happened to maneuver themselves into a dead end – without noticing.” 346View original Quote
The reference to “the mathematicians” leads us back to Einstein’s “logical-philosophical” thinking (letter to M. Solovine, 12 February 1951) in his later years as compared to physical argumentation. Fortunately, Einstein did not live long enough to be confronted with the deadly blow by Wyman & Zassenhaus (Section 9.6.2) to his idea that knowledge of regular exact solutions would bring an advancement in the understanding of UFT.
From a point of view from outside the UFT-community, M. Fierz assessed the whole endeavor. In a letter to Pauli of October 9, 1951 he compared UFT with a particular tendency in psychology: “Likewise, the field-concept is analogous to the idea of milieu. Today, group-psychology is a big fashion in England and America. It is somewhat like Einstein’s unitary field theory in the sense that the collective milieu is to explain everything such as the general field is to contain the whole of physics. Human beings are thus downgraded to herd animals – are made mechanical […].”347View original Quote ([490], p. 382.) His comparison seems to be a bit far-fetched, though. Perhaps, he felt that the field concept was no sufficient substitute for the notion of particle.
S. Schweber found a parallel between the manner of Einstein’s theorizing and his views regarding world government and the organization to be established for preventing war ([563], p. 96).
19.2.2 Philosophical background
Often, mathematicians tend to be attracted by Platonic philosophy which assumes the existence of a world of ideas characterized by concepts like truth and beauty – with the possibility of only an approximative empirical approach to it. As H. Kragh pointed out in his biography of Dirac, since the 1930s Dirac supported the claim that, by following mathematical beauty, important advances in theory can be made ([339*], p. 282). Kragh distinguished two aspects as to the function of such a “principle of mathematical beauty”: it may serve as a recommendation for the process of theory-building, but it also may be used for a justification of a theory without strong empirical footing. In his Spencer Lecture of 1933, Einstein did not content himself with mathematical beauty:
“I am convinced that we can discover by means of purely mathematical constructions the concepts and the laws connecting them with each other, which furnish the key to the understanding of natural phenomena. Experience may suggest the appropriate mathematical concepts, but they most certainly cannot be deduced from it. Experience remains, of course, the sole criterion of physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed.”348View original Quote
Pauli found astonishing “[…] Einstein’s habit to call all those content with quantum mechanics ‘positivists’, ‘empiricists’, ‘sensualists’, ‘phenomenalists’, or the like.”349View original Quote In a personal note by him, we can read: “According to Einstein, proponents of quantum mechanics should say: ‘the description is incomplete, yet its completion is meaningless because it has nothing to do with laws of nature.’ Reduces to the question whether something about which no knowledge can be obtained exists. Einstein comments his own field theory ‘these laws are in Heaven, but not on Earth.”350View original Quote cf. ([491*], 11 April 1953, p. 110).
After having worked on unified field theory since 1925, and having moved farther and farther away from experimental or observational evidence, Einstein needed such an epistemological and methodical justification for his research. He was convinced that there “is no logical path leading from the empirical material to the general principle which then supports logical deduction. […] The further theory progresses, the clearer it becomes that the fundamental laws cannot be found inductively from the empirical facts (e.g., the gravitational field equations or the Schrödinger equation of quantum mechanics).” ([163*], p. 468.)351View original Quote Also, while Einstein ascribed a “creative principle” to mathematics, he just used this discipline as a quarry for the building of physical theories. After his decision for mixed geometry, a further creative influence of mathematics on his work can hardly be found. A mathematician who had followed the work in UFT by Einstein and others until 1945, stated bluntly:
“[…] the failure up to 1945 of the attempts at a unified theory might have been anticipated: each attempt was geometry and nothing more. The truism that, to get something empirically verifiable out of mathematics, something empirically known must be put into mathematics, appeared to have been overlooked” [18].
Now, many years later, we are permitted to extend Bell’s date past 1945 until the end of the 1960s.
What is reality?
A central matter of dispute was Einstein’s conception of reality. Philosophers of science have defined different categories of realism; to give as an example the one underlying the EPR-paper. Einstein’s position then is classified accordingly; cf. [197], its review [62], and an interpretation of Einstein’s understanding of locality and separability [289]. The following lines just reflect some aspects which have come up during this review.
W. Pauli formulated his opinion on the difference between Einstein and quantum physicists in a letter of 29 June 1953 to the Austrian philosopher F. Kröner (1989 – 1958): “It is due to the Einstein’s restrictive philosophy’ whereby an ‘objective description of nature’ is only such a description which demands potential validness without explicit reference to observation** (‘detached observers’).” ([491*], p. 184–185.)352View original Quote The annotation** reads as: “cf. also the final sentence of Einstein’s ‘The Meaning of Relativity’, 4. Ed. 1953, Princeton University Press.” The last two sentences in [156] have been quoted already toward the end of Section 9.3. In a letter to Heisenberg of 5 Juli 1954, Pauli explicated this:
“Essentially Einstein begins with a ‘realistic metaphysics’ (NB not with a deterministic one) assuring him a priori that observation cannot generate a state; e.g., if an observation leads to a (‘quasi-sharp’) position, then, in the ‘objective-realistic description’ of nature, even before the observation an ‘element’ would have been there which somehow ‘corresponds’ to the result of the observation. From it, Einstein infers his realistic dogma that in the ‘objective-realistic description’ the position of an electron ought to be determined ‘quasi-sharply’ always (in all states), i.e., up to quantities of ca. 10−13 cm. Likewise, the position of the moon is determined independently of how we look at it. […] The background of Einstein’s realistic metaphysics is formed by his belief that only it can ensure differentiation between the ‘real’ and what is merely imagined (dream, hallucination).” ([491*], p. 706–707.)353View original Quote
In the same direction went Pauli’s letter to Max Born of 3 March 1954:
In conversations with Einstein, I have now seen that he takes exception to the essential premise of quantum mechanics that the “state” of a system is defined only by the specification of an experimental arrangement.[…] Einstein absolutely does not want to accept this. If one could measure precisely enough, this would be as true for macroscopic beadlets as for electrons. […] But Einstein keeps the “philosophical” prejudice that (for macroscopic bodies) under all circumstances a “state” (said to be “real”) can be “objectively” defined. This means without assigning an experimental set-up with the help of which the system (of macroscopic bodies) is investigated […]. It appears to me that the discrepancy with Einstein may be reduced to this (his) assumption which I have called the notion or the “ideal” of the “detached observer”.354View original Quote ([491], p. 509–510.)
In fact, for Einstein the quantum mechanical state function ψ could not be interpreted as a “Realzustand”: “The Realzustand cannot at all be described in present quantum theory but only (incomplete) knowledge with regard to a Realzustand. The ‘orthodox’ quantum theorists ban the concept of Realzustand in the first place (due to positivist considerations)”. Einstein to Besso 10 August 1952 ([163], p. 483).355View original Quote This dispute might also be taken as an example for debaters using different categories: with Einstein arguing from ontology and Pauli methodologically. Modern experiments have vindicated Pauli’s judgement about quantum physics and made obvious “the failure of Einstein’s attempt to show the incompleteness of quantum theory.” ([459], p. 182)
Until his death, Einstein insisted upon describing reality by a continuous field theory. In a letter of 16 October 1957 to Fierz, Pauli traced Einstein’s attitude to an ancient philosophical dispute:
“I do not doubt that classical field physics pretty directly originates from the Stoa, in a continuous trend passing the ideas of the Renaissance and of the 17th century […]. Insofar the synthesis of quantum theory and general relativity (and, generally, field quantization) is an unsolved problem, the old (ancient) conflict between atomists and the stoics continues.”356View original Quote ([493], p. 571)
Go to previous page Scroll to top Go to next page |
bae3a7d6e5cecdfe | WikiJournal of Science/A card game for Bell's theorem and its loopholes
From Wikiversity
Jump to navigation Jump to search
WikiJournal of Science logo.svg
WikiJournal of Science
Wikipedia-integrated • Public peer review • Libre open access
Journal issues
<meta name='citation_doi' value='10.15347/wjs/2018.005'>
Article information
Authors: Guy Vandegrift[i], Joshua Stomel
Vandegrift, G; Stomel, J.
In 1964 John Stewart Bell made an observation about the behavior of particles separated by macroscopic distances that had puzzled physicists for at least 29 years, when Einstein, Podolsky and Rosen put forth the famous EPR paradox. Bell made certain assumptions leading to an inequality that entangled particles are routinely observed to violate in what are now called Bell test experiments. As an alternative to showing students a "proof" of Bell's inequality, we introduce a card game that is impossible to win. The solitaire version is so simple it can be used to introduce binomial statistics without mentioning physics or Bell's theorem. Things get interesting in the partners' version of the game because Alice and Bob can win, but only if they cheat. We have identified three cheats, and each corresponds to a Bell's theorem "loophole". This gives the instructor an excuse to discuss detector error, causality, and why there is a maximum speed at which information can travel.
The conundrum
Although this can be called a theorem, it might be better viewed as something "spooky" that has been routinely observed, and is consistent with quantum mechanics. But this puzzling behavior violates what might be called common notions about what is and is not possible.[1][2] Students typically encounter a mathematical theorem as an incomprehensible statement that cannot be digested until it is first proven and then applied in practice. It is not uncommon for novices to refer to some version of Bell's inequality as Bell's theorem because the inequality can be mathematically "proven".[3] The problem is that what is proven turns out to be untrue.
David Mermin described an imaginary device not unlike that shown in Fig. 1, and refers to the fact that such a device actually exists as a conundrum, then pointed out that many physicists deny that it is a conundrum.[4]
A simple Bell's theorem experiment
It is customary to name the particles[5] in a Bell's theorem experiment "Alice" and "Bob", an anthropomorphism that serves to emphasize the fact that a pair of humans cannot win the card game ... unless they cheat. To some experts, a "loophole" is a constraint on any theory that might replace quantum mechanics.[6] It is also possible to view a loophole as a physical mechanism by which the outcome of a Bell's theorem experiment might seem less "spooky". In this paper, we associate loopholes with ways to cheat at the partners' version of the card game. It should be noted that the three loophole mechanisms introduced in this paper raise questions that are even spookier than quantum mechanics: Are the photons "communicating" with each other? Do they "know" the future? Do they "persuade" the measuring devices to fail when the "cards are unfavorable"?[7]
Since entanglement is so successfully modeled by quantum mechanics, one can argue that there is no need for a mechanism that "explains" it. Nevertheless, there are reasons for investigating loopholes. At the most fundamental level, history shows that a successful physical theory can be later shown to be an approximation to a deeper theory, and the need for this new theory is typically associated with a failure of the old paradigm. It is plausible that a breakdown of quantum mechanics might be discovered using a Bell's theorem experiment designed to investigate a loophole. But the vast majority of us (including most working physicists) need other reasons to care about loopholes: Many find it interesting that we seem to live in a universe governed by fundamental laws, and Bell's theorem yields insights into the bizarre nature of those laws. Also, those who teach can use these card games to motivate introductory discussions about statistical inference, polarization, and modern physics.
Bell card house.svg
Figure 1 | The outside casing of each device remains stationary while the circle with parallel lines rotates with the center arrow pointing in one of three directions (, , .) If Jacks are used to represent these directions, Alice will see J as her question card. She will respond with an "odd"-numbered answer card (3) to indicate that she is blocked by the filter. If Bob passes through a filter with the "spade" orientation, he sees J as the question card, and answers with the "even" numbered 2. This wins one point for the team because they gave different answers to different questions.
Figure 1 shows a hypothetical and idealized experiment involving two entangled photons simultaneously emitted by a single (parent) atom. After the photons have been separated by some distance, each is exposed to a measurement that determines whether the photon would pass or be blocked by the polarizing filter.[8] To ensure that the results seem "spooky" it should be possible to rotate the filter while the photons are en route so that the filter's angle of orientation is not "known" to either photon until it encounters the filter. If the filters are rotated between only three polarization angles, we may use card suits (hearts , clubs ♣, spades ♠) to represent these angles. These three polarization angles are associated with "question" cards, because the measurement essentially asks the photon a question:
"Will you pass through a filter oriented at this angle?"
For simplicity we restrict our discussion to symmetric angles (0°, 120°, 240°.) The filter's axis of polarization is shown in the figure as parallel lines, with the center line pointing to the heart, club, or spade. Any face card can be used to "ask" the question, and the four face cards (jack, queen, king, ace) are equivalent. If the detectors are flawless, each measurement is binary: The photon either passes or is blocked by the filter (subsequent measurements on a photon would yield nothing interesting.) The measurement's outcome is represented by an even or odd numbered "answer" card (of the same suit). The numerical value of an answer card is not important: all even numbers (2,4,6,8) are equivalent and represent a photon passing through the filter, while the odd cards (3,5,7,9) represent a photon being blocked.
Although Bell's inequality is easy to prove[9], we avoid it here because the card game reverses roles regarding probability: Instead of the investigators attempting to ascertain the photons' so-called hidden variables, the players are acting as particles attempting to win the game by guessing the measurement angles. Another complication is that the original form of Bell's inequality does not adequately model the partners' version of the game because humans have the freedom to exhibit a behavior not observed by entangled particles (under ideal experimental conditions). This behavior involves a 100% correlation (or anti-correlation) whenever the polarization measurement angles are either parallel or perpendicular to each other. [10] In the partners' version of the card game, this behavior must be enforced by deducting a penalty from the partners' score whenever they are caught using a forbidden strategy (which we shall later call the β-strategy). The minimum required penalty is calculated in Supplementary file:The car and the goats. Fortunately students need not master this calculation because the actual penalty should often be whatever it takes to encourage a strategy that mimics this aspect of entanglement (which we shall call the α-strategy.)
A theoretical understanding of how one can model entanglement using the Schrödinger equation can be found in Supplementary file:Tube entanglement.
The solitaire card game
Bell's card game solitaire.svg
Figure 2 | Solitaire version of game. Cases 1, 2, and 3 represent three possible outcomes if the player chooses the best strategy (later called the "α-strategy": One answer (here, "odd" for ) differs from that given for the other two questions (here, "even" for & .)
Figure 2 shows the three possible outcomes associated with one hand of the solitaire version of the game. The solitaire version requires nine cards. The figure uses a set with three "jacks" ( ♣ ♠) for the questions, and (2,3) for the six (even/odd) answer cards. To play one round of the game, the player first shuffles the three question cards and places them face down so their identity is not known. Next, for each of the three suits, the player selects an even or odd answer card. The figure shows the player choosing the heart and club to be even, while the spade is odd: 2 2♣ 3♠. This is the only viable strategy, since the alternative is to always lose by selecting three answers that are all even or all odd. In the partner's version we shall introduce a second, β-strategy, which is not possible in the solitaire game.
After three answer cards are selected and turned face up, two of the three question cards are randomly selected and also turned face up. Figure 2 depicts all three equally probable outcomes, or ways to select two out of three cards (3 choose 2.)[11] The round is scored by adding or subtracting points, as shown in Table 1: First the suit of each of the two upturned question cards is matched to the corresponding answer card. In case 1 (shown in the figure), the player wins one point because answers are different: is an even number, while ♠ is odd. The player loses three points in case 2 because the and ♣ are the same (even). Case 3 wins one point for the player because the answers are different. It is evident that the player has a 2/3 probability winning a round. The conundrum of Bell's theorem is that entangled particles in an actual experiment manage to win with a probability of 3/4. Table 1 shows that this scoring system causes humans to average a loss of at least 1/3 of a point per round, while entangled particles maintain an average score of zero.[12] How do particles succeed where humans fail?
Table 1: Solitaire Scoring
Points Answers are: Example[13]
+1 different 2 and 3♠
+1 different 2 and 3♣
−3 same 2 and 2♣
The game for entangled partners
In the partners' version, Alice and Bob each play one (even/odd) answer card in response to the suit of a question card. Every round is played in two distinctly different phases. Alice and Bob are allowed to discuss strategy during phase 1 because it simulates the fact that the particles are (effectively) "inside" the parent atom before it emits photons. Then, all communication between the partners must cease during phase 2, which simulates the arrival of the photons at the detectors for measurement under conditions where communication is impossible. In this phase each player silently plays an (even/odd) answer that matches the question's suit. The player cannot know the other's question or answer during phase 2.
In the solitaire version, the player held a deck of six numbered cards and pre-selected (even/odd) answers for each of the three (question) suits. This simulated the parent atom "deciding" the responses that each photon will give to all possible polarization measurements.[14] In an "ideal" Bell's theorem experiment, the two photons' responses to identical polarization measurement angles are either perfectly correlated or perfectly anticorrelated.[8][15] This freedom to independently choose different answers when Alice and Bob are faced with the same question creates a dilemma for the designers of the partners' version of the card game. Adherence to any rule forbidding different answers to the same question cannot always be verified. To enforce this rule, we deduct points whenever they give different answers to the same question. No points are awarded for giving the same answer to the same question. Note how this complexity is relevant to actual experiments because detectors can register false events. The minimum penalty that should be imposed depends on how often the partners are given question cards of the same suit, and is derived at Supplementary file:The car and the goats:
where is the probability that Alice and Bob are asked the same question. The equality holds if and , which can be accomplished by randomly selecting two question cards from nine (K♠, K, K♣, Q♠, Q, Q♣,J♠, J, J♣), as shown in Fig. 3. If the equality in (1) holds, the partners are "neutral" with respect to the selection of two different strategies, one of which risks the 4 point penalty. Both strategies lose, but the loss rate is reduced to −1/4 points per round, because the referee must dilute the number of times different questions are asked.
A sample round begins in the top part of Fig. 3 as phase 1 where the pipe smoking referee has selected different questions (hearts and spades). In a classroom setting, consider allowing Alice and Bob to side-by-side, facing slightly away from each other during phase 2. Arrange for the audience to sit close enough to listen and watch for evidence of surreptitious communication between Alice and Bob. The prospect of cheating not only makes the game more fun, but also allows us to introduce "loopholes". The "thought-bubbles" above the partners show a tentative agreement by the partners to play the same α-strategy introduced in the solitaire version (both say "even" to ♣, and "odd" to ♠.) It is important to allow both players to hold all the answer cards in phase 2 so that each can change his or her mind upon seeing the actual question. The figure shows them following their original plan and winning because the referee selected a heart for Alice and a spade for Bob.
Bell's card game entangled.svg
Figure 3 | One round of the partners' version with Alice and Bob employing the same strategy (α) introduced in the solitaire game. Here, a version of "neutral" scoring is used in which the referee randomly selects from the nine question cards, with a penalty of 4 points assessed if different answers are given to the same question. Instructors might wish to override this "neutral" scoring by asking the same question more often than called for in the random selection.
But the partners have another strategy that might win: Suppose Alice agrees to answer "even" to any question, while Bob answer is always "odd". This wins (+1) if different questions are asked, and loses (−Q) if the same question is asked. This is called the β-strategy. The Supplementary file:The car and the goats establishes that no other strategy is superior to the α and/or β strategies:
α-strategy: Alice and Bob select their answers in advance, in such a way that both give the same answer if asked the same question. For example, they might both agree that ♣ are even, while ♠ is odd. This strategy was ensured in the solitaire version because only three cards are played: If the heart is chosen to be "even", the solitaire version models a situation where both Alice and Bob would answer "even" to "heart". This α-strategy requires that one answer differs from the other two (i.e., all "even" or all "odd" is never a good strategy). The expected loss is 1/3 for each round whenever different questions are asked.
β-strategy: One partner always answers "even" while the other always answers "odd". This strategy gains one point if different questions are asked, and loses points if the same question is asked.
For pedagogical reasons, the instructor may wish to discourage the β-strategy. If Alice and Bob are not asked the same question often, they might choose to risk large losses for the possibility winning just a few rounds using the β-strategy, perhaps terminating the game prematurely with a claim that they lost "quantum entanglement". To counter this, the referee can raise the penalty to six points and randomly shuffle only six question cards that result from the merging of two solitaire decks. We refer to any scoring that favors the players' use of the α-strategy as "biased scoring". To further inhibit use of the β-strategy, the referee should routinely override the shuffle and deliberately select question cards of the same suit. The distinction between biased and neutral scoring lies in whether the equality or the inequality holds in (1). Table 2 shows examples of each scoring system. Both were selected to match an integer value for . The shuffle of 9 face cards exactly matches the equality in (1) if , while the more convenient collection of 6 face cards will bias the players towards the α-strategy if .[16]
Table 2: Examples of neutral and biased scoring
Neutral scoring
Shuffle 9 face cards to ask the same question exactly 25% of the time.
Biased scoring
Shuffle 6 face cards and/or ask the same question with a probability higher than 2/11.
Points Alice and Bob give... Example Points
+1 different answers to different questions "even" to hearts and "odd" to spades +1
−3 the same answer to different questions "even" to clubs and "even" to hearts −3
−4 different answers to the same question "even" to clubs and "odd" to clubs −6
0 the same answer to the same question "even" to clubs (for both players) 0
Cheating at cards and Bell's theorem "loopholes"
In the card game, Alice and Bob could either win by surreptitiously communicating after they see their question cards, or by colluding with the referee to learn the questions in advance. Which seems more plausible, information travelling faster than light, or atoms acting as if they "know" the future? A small poll of undergraduate math and science college students suggests that they are inclined to favor faster-than-light communication as being more plausible. We shall use a space-time diagram to illustrate how faster-than-light communication violates causality by allowing people to send signals to their own past. And, we shall argue that one can make the case that decisions made today by humans regarding how and where to perform a Bell's theorem experiment next week, might be mysteriously connected to the behavior of an obscure atom in a distant galaxy billions of years ago.[17]
The third loophole was a surprise for us. In an early trial of the partners' game, a student[18] stopped playing and attempted to construct a modified version of the α-strategy that uses the new information a player gains upon seeing his or her question card. After convincing ourselves that no superior strategy exists, we realized that a player could cheat by terminating the game after seeing his or her own question card, but before playing the answer card. This is related to an important detector efficiency loophole.[19] The student's discovery also alerted us to the fact that our original calculation of (1) was just a lucky guess based on flawed logic.
Magic phones: Communications loophole
Alice and Bob could win every round of the partners' version if they cheat by communicating with each other after seeing their question cards in phase 2. In an actual experiment, this loophole is closed by making the measurements far apart in space and nearly simultaneous, which in effect requires that these communications travel faster than the speed of light.[20] While any faster-than-light communication is inconsistent with special relativity, we shall limit our discussion to information that travels at nearly infinite speed.[21]
Instantaneous communication Minkowskilike.svg
Figure 4 | "Magic phone#1" is situated on a moving train and can be used by Alice to send a message to Bob's past, which Bob relays back to Alice's past using the land-based "Magic phone #2". These magic phones transmit information with near infinite speed.
Figure 4 shows Alice and Bob slightly more than one light-year apart. The dotted world lines for each is vertical, indicating that they remain at rest for over a year. The slopes of world lines of the train's front and rear are roughly 3 years per light-year, corresponding to about 1/3 the speed of light. Both train images are a bit confusing because it is difficult to represent a moving train on a space-time diagram: A moving train can be defined by the location of each end at any given instant in time. This requires the concept of simultaneity, which is perceived differently in another reference frame. The horizontal image of the train at the bottom represents to location of each car on the train on the first day of January, as time and simultaneity are perceived by Alice and Bob. To complicate matters, the horizontal train image is not what they would actually see due to the finite transit time required for light to reach their eyes. It helps to imagine a distant observer situated on a perpendicular to some point on the train. The transit time for light to reach this distant observer will be nearly the same for every car on the train. Many years later, this distant observer will see the horizontal train as depicted at the bottom of the figure. It will be instructive to return to the perspective of this distant observer after the paradox has been constructed.
The slanted image of the train depicts the location of each car on the day that the (moving) passengers perceive the front to be adjacent to Alice, at the same time that the train's rear is perceived to be adjacent to Bob. It should be noted that Alice and Bob do not perceive these two events as simultaneous. The figure shows that the rear passes Bob several months before the front passes Alice (in the partners' reference frame.)
Now we establish that the passengers perceive the front of the train to reach Alice at the same time that the rear reaches Bob. The light-emitting-diode (LED) shown at the bottom of Fig. 4 emits two pulses from the center of the train in January. It is irrelevant whether the LED is stationary or moving because all observers will see the pulses travelling in opposite directions at the speed of light (±1 ly/yr.) Note how the backward moving pulses reaches the rear of the train in May, five months before the other pulse reaches the train's front in October. But, the passengers see two light pulses created at the center of the train, directed at each end of the train, and will therefore perceive the two pulses as striking simultaneously.
To create the causality paradox, we require two "magic-phones" capable of sending messages with nearly infinite speed. Unicorn icons use arrows to depict the information's direction of travel: magic phone #1 transmits from Alice to Bob, while #2 transmits from Bob to Alice. Magic phone #1 is situated on the moving train. When Alice shows her message through the front window as the train passes her in October, a passenger inside relays the message via magic phone #1 to the train's rear, where Bob can see it through a window. Bob immediately relays the message back to Alice via the land-based magic phone #2 in May, five months before she sent it.
Our distant observer will likely take a skeptical view of all this. The slope of the slanted train's image indicates that the distant observer will see magic phone #1 sending information from Bob to Alice, opposite to what the passengers perceive. The distant observer will first see the message inside the rear of the train (when it was adjacent to Bob in May). That message will immediately begin to travel towards of Alice, faster than the speed of light, but slow enough so that Alice will not receive the it until October. Meanwhile, Bob sends the same message via land-based phone #2 to Alice, who receives it in May. Alice waits for almost five months, until she prepares to send the same message, showing it through the front window just before the message also arrives at the front via the train-based magic phone #1. It would appear to the distant observer that the events depicted in Fig. 4 had been artificially staged.
This communications loophole in an actual Bell test experiment was closed by arranging for the measurements to coincide so that any successful effort to communicate would suggest that humans could change their own past using this ability to send information faster than light.
Referee collusion: Determinism loophole
Bell's theorem superdetermism cards.svg
Figure 5 | Cosmic photons from two distant spiral galaxies arrive on Earth with properties that trigger the filters to ask the & questions of photons just prior to their arrival with a winning combination of (even/odd) answers.
This "determinism", or "freedom-of-choice" loophole involves the ability of the quantum system to predict the future. Curiously, the strategy would not be called "cheating" in the card game if Alice or Bob relied on intuition to guess which cards the referee will play in the upcoming round. But what makes this loophole bizarre when applied to a Bell test experiment is that it would have been necessary to predict the circumstances under which the experiment was designed and constructed by human beings who evolved on a plant that was formed almost five billion years ago. On the other hand, viewing the parent atom, the two photons, and the detectors as one integrated quantum entity is consistent with the proper modeling of a quantum-mechanical system. The paradoxical violation of Bell's inequality arises from the need to model two remote particles as one system, so it is not unreasonable to assume that the conundrum can be resolved by including the devices that make the measurements into that model.
Figure 5 is inspired by a comment made by Bell during a 1985 radio interview that mentioned something he called "superdeterminism". [22][23] It is a timeline that depicts the big bang, beginning at a time when space and time were too confusing for us to graph. At this beginning, "instructions" were established that would dictate the entire future of the universe, from every action taken by every human being, to the energy, path, and polarization of every photon that will ever exist. Long ago, obscure atoms in two distant galaxies (Sb and Sc) were instructed to each emit what will become "cosmic photons" that strike Earth. Meanwhile, "instructions" will call for humans to evolve on Earth and create a Bell's theorem experiment that uses the frequency and/or polarization of cosmic photons to set the polarization measurement angles while the entangled photons Alice and Bob are still en route to the detectors. Alice and Bob will arrive at their destinations already "knowing" how to respond because the cosmic photons were "instructed" to have properties that cause the questions to be "heart" and "spade".
Viewed this way, the events depicted in Fig. 5 are just the way things happen to turn out. Efforts to enact the scenario with an actual experiment using cosmic photons in this way are being carried out. The most recent experiment looks back at photons created 600 years ago.[24][25] Note also how this experiment does not "close" the loophole, but instead greatly expands the scale of any "collusion" between the parent atom and detectors.
It is claimed that the results of Bell test experiments do not contradict special relativity, despite what may appear to some as faster-than-light "communication" between Alice and Bob.[26] Figure 5 can help us visualize this if the "instructions" represent the time evolution of an exotic version of Schrödinger's equation for the entire universe. If this wave equation is deterministic, future evolution of all probability amplitudes is predetermined. One flaw in this argument is that it relies on an equation that governs the entire universe, and for that reason is not likely to be solved or written down. Perhaps this is why the paradox seems to have no satisfactory resolution.
The Rimstock cheat: Detector error loophole
Bell's rimstock cards transposed.svg
Figure 6 | The Rimstock cheat: Bob flips a coin to determine whether to play the cheat on this round. Alice will play "even" to hearts and "odd" to spades or clubs.
Rimstock spaghetti Bell's theorem 50.svg
Figure 7 | Four teams of players engaging in the detector error cheat. Each connected dot represents a hand in which different questions were asked, and the horizontal dots simulate a detector error that coincided with a player receiving an unfavorable question.
The following variation of the α-strategy allows the team to match the performance of entangled particles by achieving an average score of zero: Alice preselects three answers and informs Bob of her decision. Bob will either answer in the same fashion, or he might abruptly stop the hand upon seeing his question card, perhaps requesting that the team take a brief break while another pair of students play the role of Alice and Bob. In a card game, this request to stop and replay a hand would require the cooperation of a gullible scorekeeper. But no detector in an actual Bell's theorem experiment is 100% efficient, and this complicates the analysis of a Bell's theorem experiment in a way that requires both careful calibration of the detector's efficiency, as well as detailed mathematical analysis.
Since this strategy never calls for Alice and Bob to give different (even/odd) answers to the same question, we may consider only rounds where the players get different questions. To understand why Bob might refuse to play a card, suppose Alice plans to answer "even" to hearts and "odd" to clubs and spades. As indicated at the top of Fig. 6, Bob the heart is the "desired" suit because he knows they will win if he sees that question. But their chances of wining are reduced to only 50% if Bob sees the "undesired" club or spade. To avoid raising suspicion, Bob does not stop the game each time he sees an unfavorable question. Instead, he stops with a 50% probability upon seeing an unfavorable card. To calculate the average score, we construct a probability space consisting of equally probable outcomes, beginning with the three possible suits that Bob might see. We quadruple the size of this probability space (from 3 to 12) by treating the following two pairs of events as independent, and occurring with equal probability:
1. Bob will either stop the hand, or play round (Do stop or Don't stop.)
2. After seeing his question, Bob knows that Alice might receive one of only two possible questions (ignoring rounds with the same question asked of both.)
Figure 6 can be used to show that Bob will stop the game with a probability of 1/3.[27] But if Bob and Alice randomly share this role of stopping the game, each player will stop a given round with only 1/6, yielding an apparent detector efficiency of 5/6 = 83.3%.[19] Typical results for a team playing this ruse are illustrated in Fig. 7. Ten rounds are played on four different occasions. The vertical axis represents in the team's net score, with upward steps corresponding to winning one point, and downward corresponding to losing three points. The horizontal lines showing no change in score indicate occasions where Bob or Alice refused to play an answer card (it was never necessary to ask both partners the same question in this simulation.)
Figure 6 was generated using an Excel spreadsheet using the rand() function, which caused the graphs to change at every ctrl+= keystroke. It took several strokes to get a graph where the lines did not cross, and all the event counts were this close to expected values. As discussed in a supplement, an Excel verification lab is an appropriate activity in a wide variety of STEM courses.
Pedagogical issues
To make sixteen solitaire decks, purchase three identical standard 52 card decks. Remove only one suit (hearts, clubs, spades) from each deck to create four solitaire sets. Each group should contain 3-5 people, and two solitaire decks (for "biased" scoring with Q=6.) To avoid confusion of an ace (question card) with an (even/odd) answer card, reserve the ace for groups with large even/odd number cards. For example, one group might have solitaire sets with (ace,8,9) and (king, 6,7). In a small classroom, the entire audience can observe or even give advice to one pair playing the partners' version at the front of the room. Placing the question cards adjacent to the players at the start will permit the instructor and entire class to join the partners' discussion regarding strategy during phase 1. For "neutral" scoring with Q=4, the instructor can either borrow question cards from the class, or convert unused "10" cards into questions. Since cheating will come so naturally, this game is not suitable for gambling (even for pennies).
Bell's theorem can lead to topics ranging from baseless pseudoscience to legitimate (but pedagogically unnecessary) speculation regarding alternatives to the theory of quantum mechanics. While few physicists are experts in such topics, all teachers will eventually face such issues in the classroom. The authors of this paper claim no expertise in any of this, and the intent is to illustrate the "spookiness" of Bell's theorem, show how one can use simple logic to prove that faster-than-light communication violates special relativity,[21] and introduce students to the concept of a "deterministic" theory or model.[26]
Additional information
Supplementary material
• Supplementary file 1 | The car and the goats (OOjs UI icon download-ltr.svg Download) A rigorous proof of the penalty that yields "neutral scoring".
• Supplementary file 2 | Impossible correlations (OOjs UI icon download-ltr.svg Download) Extends Bell's inequality to non-symmetric cases and also proves the CHSH inequality (without using calculus).
• Supplementary file 3 | Tube entanglement (OOjs UI icon download-ltr.svg Download) Describes a simple analog to entanglement with polarized photons. It relies on Maluss' Law, and also introduces Dirac notation as a shorthand representation for the wavefunction of two non-interacting massive particles confined to a narrow tube.
Versions of this manuscript have received five referee reports (it was first submitted to the American Journal of Physics.) It is obvious that each referee was highly qualified, and that each exerted a considerable effort to improve the quality of this paper.
Competing interests
Guy Vandegrift is a member of the WikiJournal of Science editorial board.
1. Bell, John S. (1964). "On the Einstein Podolsky Rosen Paradox". Physics 1 (3): 195–200. doi:10.1103/physicsphysiquefizika.1.195.
2. Vandegrift, Guy (1995). "Bell's theorem and psychic phenomena". The Philosophical Quarterly 45 (181): 471–476. doi:10.2307/2220310.
3. See for example this discussion on the Wikipedia article's talk page, or Wikipedia's effort to clarify this with w:No-go theorem
4. Mermin, N. David (1981). "Bringing home the atomic world: Quantum mysteries for anybody". American Journal of Physics 49 (10): 940–943. doi:10.1119/1.12594. "(referring to those who do not consider this a conundrum) In one sense they are obviously right. Compare Tovey's remark that (Beethoven's) Waldstein Sonata has no more business than sunsets and sunrises to be paradoxical."
5. or detectors
6. These (hypothetical) theories are called "hidden variable" theories Larsson, Jan-Åke (2014). "Loopholes in Bell inequality tests of local realism". Journal of Physics A: Mathematical and Theoretical 47 (42): 424003. doi:10.1088/1751-8113/47/42/424003.
7. In w:special:permalink/829073568 these questions are associated with the "communication (locality)", the "free choice" and a "fair sampling" loophole, respectively.
8. 8.0 8.1 In most experiments electro-optical modulators are used instead of polarizing filters, and often it is necessary to rotate one set of orientations by 90°. Giustina, Marissa; Versteegh, Marijn A. M.; Wengerowsky, Sören; Handsteiner, Johannes; Hochrainer, Armin; Phelan, Kevin; Steinlechner, Fabian; Kofler, Johannes et al. (2015). "Significant-Loophole-Free Test of Bell’s Theorem with Entangled Photons". Physical Review Letters 115 (25): 250401. doi:10.1103/physrevlett.115.250401.
9. Maccone, Lorenzo (2013). "A simple proof of Bell's inequality". American Journal of Physics 81 (11): 854–859. doi:10.1119/1.4823600.
10. See equation 29 in Aspect, Alain (2002). "Bell's theorem: the naive view of an experimentalist". In Bertlmann, Reinhold A.; Zeilinger, Anton (eds.). Quantum [un] speakables (PDF). Berlin: Springer. p. 119-153. doi:10.1007/978-3-662-05032-3_9.
11. or "n choose k" is defined in w:Binomial coefficient
12. The player can lose more than 1/3 of a point per round by adopting the obviously bad strategy of making all three answers the same (all even or all odd.) This is closely related to the fact that Bell's "inequality" is not Bell's "equation".
13. Since 3-choose-2 equals 6, three other cases exist; all involve 3.
14. Keep in mind that it seems artificial for the parent atom to "know" that these photons are part of an experiment involving just three possible polarization measurements. This need to somehow orchestrate all possible fates for each emitted photon created the EPR conundrum long before Bell's inequality was discovered. See w:EPR paradox.
15. It is best not to assume that this correlations implies that the "decision" regarding polarization was actually made as the two photons are created by the parent atom. In physics, mathematical models should be judged by whether they yield predictions that can be verified by experiment, not whether these models make any sense.
16. Equation (1) shows that the case is neutral at .
17. One can also make the case that it is not the role of physics (or science) to speculate in such matters.
18. User:Rimstock
19. 19.0 19.1 Garg, Anupam; Mermin, N. David (1987). "Detector inefficiencies in the Einstein-Podolsky-Rosen experiment". Physical Review D 35 (12): 3831–3835. doi:10.1103/physrevd.35.3831.
20. Aspect, Alain; Dalibard, Jean; Roger, Gérard (1982). "Experimental Test of Bell's Inequalities Using Time- Varying Analyzers". Physical Review Letters 49 (25): 1804–1807. doi:10.1103/physrevlett.49.1804.
21. 21.0 21.1 Liberati, Stefano; Sonego, Sebastiano; Visser, Matt (2002). "Faster-than-c Signals, Special Relativity, and Causality". Annals of Physics 298 (1): 167–185. doi:10.1006/aphy.2002.6233.
22. Bell, John S. (2004). "Introduction to hidden-variable question". Speakable and unspeakable in quantum mechanics: Collected papers on quantum philosophy. Cambridge University Press. pp. 29–39. doi:10.1017/cbo9780511815676.006.
23. Kleppe, A. (2011). "Fundamental Nonlocality: What Comes Beyond the Standard Models". Bled Workshops in Physics. 12. pp. 103–111. In that interview, Bell was apparently speculating about a deterministic "hidden variable theory where all outcomes are highly dependent on initial conditions.
24. Gallicchio, Jason; Friedman, Andrew S.; Kaiser, David I. (2014). "Testing Bell’s Inequality with Cosmic Photons: Closing the Setting-Independence Loophole". Physical Review Letters 112 (11): 110405. doi:10.1103/physrevlett.112.110405.
25. Handsteiner, Johannes; Friedman, Andrew S.; Rauch, Dominik; Gallicchio, Jason; Liu, Bo; Hosp, Hannes; Kofler, Johannes; Bricher, David et al. (2017). "Cosmic Bell Test: Measurement Settings from Milky Way Stars". Physical Review Letters 118 (6): 060401. doi:10.1103/PhysRevLett.118.060401.
26. 26.0 26.1 See also Ballentine, Leslie E.; Jarrett, Jon P. (1987). "Bell's theorem: Does quantum mechanics contradict relativity?". American Journal of Physics 55 (8): 696–701. doi:10.1119/1.15059.
27. 2/3 is the product of the probability of receiving an unfavorable card, and 1/2 is the probability of stopping; hence (2/3)(1/2)=1/3 |
fe1370e74e18a311 | I'm having trouble doing it. I know so far that if we have two Hermitian operators $A$ and $B$ that do not commute, and suppose we wish to find the quantum mechanical Hermitian operator for the product $AB$, then
However, if I have to find an operator equivalent for the radial component of momentum, I am puzzled. It does not come out to be simply
where $\vec{r}$ and $\vec{p}$ are the position and the momentum operator, respectively. Where am I wrong in understanding this?
You would have to use the fact that the momentum operator in position space is $\vec{p} = -i\hbar\vec{\nabla}$ and use the definition of the gradient operator in spherical coordinates:
$$\vec{\nabla} = \hat{r}\frac{\partial}{\partial r} + \hat{\theta}\frac{1}{r}\frac{\partial}{\partial\theta} + \hat{\phi}\frac{1}{r\sin\theta}\frac{\partial}{\partial\phi}$$
So the radial component of momentum is
$$p_r = -i\hbar\hat{r}\frac{\partial}{\partial r}$$
However: after a bit of investigation prompted by the comments, I found that in practice this is not used very much. It's more useful to have an operator $p_r'$ that satisfies
$$-\frac{\hbar^2}{2m}\nabla^2 R(r) = \frac{p_r'^2}{2m} R(r)$$
This lets you write the radial component of the time-independent Schrödinger equation as
$$\biggl(\frac{p_r'^2}{2m} + V(r)\biggr)R(r) = E R(r)$$
The action of the radial component of the Laplacian in 3D is
$$\nabla^2 R(r) = \frac{1}{r^2}\frac{\partial}{\partial r}\biggl(r^2\frac{\partial R(r)}{\partial r}\biggr)$$
and if you solve for the operator $p'_r$ that satisfies the definition above, you wind up with
$$p'_r = -i\hbar\biggl(\frac{\partial}{\partial r} + \frac{1}{r}\biggr)$$
This is called the "radial momentum operator." Strictly speaking, it is different from the "radial component of the momentum operator," which is, by definition, $p_r$ as I wrote it above, although I wouldn't be surprised to find people mixing up the terminology relatively often.
I was able to figure it out, so here goes the clarification for the record.
Classically $$p_r=\hat{D}_r = \frac{\hat{r}}{r} \cdot\hat{p} = \frac{\hbar}{i}\frac{\partial}{\partial r}$$
However $\hat{D}_r$ is not hermitian. Consider the adjoint $$\hat{D}_r^\dagger= \hat{p}\cdot\frac{\hat{r}}{r} =\frac{\hbar}{i} \left ( \frac{\partial}{\partial r}+\frac{2}{r} \right ) $$
Now we know from linear algebra how to construct a hermitian operator from an operator and its adjoint: $$\hat{p}_r = \frac{\hat{D}_r^\dagger+\hat{D}_r}{2}=\frac{\hbar}{i} \left ( \frac{\partial}{\partial r}+\frac{1}{r} \right ) $$
And btw, for those who followed my initial question, don't do the following calculational mistake that I committed:
$$\hat{p}_r = \frac{1}{2}\left(\frac{1}{r}(\vec{r}\cdot\vec{p})+(\vec{p}\cdot\vec{r})\frac{1}{r}\right)\neq \frac{1}{2}\frac{1}{r}(\vec{r}\cdot\vec{p}+\vec{p}\cdot\vec{r})$$
• $\begingroup$ Dear @yayu: So your initial guess in the question is correct after all! $\endgroup$ – Qmechanic May 2 '11 at 20:55
• $\begingroup$ @Qmechanic yes, but I'd thought that this was an ad hoc choice and missed the notion of hermiticity which actually guides the construction. $\endgroup$ – yayu May 7 '11 at 6:02
• $\begingroup$ @Qmechanic Also thanks, the same notation for operator and unit vectors can be regarded as one of the culprits for my initial mistake in calculation. (i.e making the final inequality an equality) $\endgroup$ – yayu May 7 '11 at 6:05
Even in one dimension the operator $p_r=-i\partial_r$ on the half line $r>0$ has deficiency indices $(0,1)$. There is thus no way to define it it as a self-adjoint operator. In practical terms this abstract mathematical statement means that there is no set of boundary conditions thta we can impose on the wavefunction $\psi(r)$ that lead to a complete set of eigenfunctions for $p_r$. For example, integration by parts to prove formal hermiticity requires that $\psi(0)=0$ but all potential eigenfunctions are of the form $\psi_k(r)=\exp\{ikr\}$ for some real or complex $k$, and no value of $k$ can make $\psi_k(0)$ be zero.
Since one needs eigenfunctions and eignvalues to assign a probability to the outcome of a measurement of $p_r$ this means that $p_r$ is not an ``observable.''
• 4
$\begingroup$ You are just explaining why $\mathrm{i}\partial_r$ isn't Hermitian. The question asks how to construct the correct observable corresponding to radial momentum, not why the naive guess doesn't work. $\endgroup$ – ACuriousMind Nov 5 '15 at 16:02
1D is indeed not good one for radial operator (and if one uses it then it corresponds to half line with reflection boundary condition at x=0, and then the boundary term vanishes there so again h/i*d/dr is hermitian). But on higher dimensions, e.g. 3, one need to think of inner product with $r^2$ weight function, so it cancels the the boundary term. Of course this gives another term when doing integration by parts, and that is why $d/dr$ alone if not enough for hermiticity.
Of course this corresponds also to the fact that "plane wave" in radial coordinate has some function of $r$ in the denominator, because of energy conservation, e.g. $e^{ikr}/r$ in 3D.
Also see this nice paper, which describes the failure of this opproach in 2D.
protected by AccidentalFourierTransform Nov 22 '18 at 17:36
Would you like to answer one of these unanswered questions instead?
|
7e19e3b8973ef1b7 | Category Archives: Academia
A somewhat coherent post on a robust idea
The word “coherence” has different meaning for different people. Most people may think of the notion of being logical and consistent, be it in speaking or in acting. Actually, we all hope to deal with people — especially politicians(!) — who exhibit coherence between what they say and what they do. And we all hope that the next major blockbuster movie is coherent, with no major plot holes that make you grind your teeth in your seat, unable to fully enjoy your popcorn.
Nonetheless, to a physicist, coherence is also a notion associated with wave behaviour. More precisely, it is associated with the possibility of seeing the effects of superposition, which is the coherent(!) combination of different physical possibilities. For example, the superposition of sounds waves is what allows people to listen to music in the background, while pleasantly chatting.
Among the effects of superposition more affected by coherence (or lack thereof) are phenomena of interference, be it constructive or destructive, like the ones that you can experience with noise-cancelling headphones, for sound, or by looking at the colours of a soap bubble, for light. The recent detection of gravitational waves was possible exactly using the fact that light is a wave, and as such it can be used to detect tiny variations in length within an “interferometer”. Without coherence, neither constructive nor destructive interference would be possible, because both kinds of interference would be “washed out” and inexistent in practice.
The importance of coherence becomes enormous, both conceptually and practically, when we realize that in quantum mechanics everything is also a wave, including what would normally — or, rather, “classically” — be considered “particles”, like electrons and atoms. Mathematically speaking, what we do is to associate a wave — the wave-function — to any physical system or compound of physical systems, more precisely to the state of the system. The evolution in time of the state of the object is given by the evolution of such a wave, described by the famous Schrödinger equation. Then, predictions of what one can observe, and with what probability, can be computed from the knowledge of the wave at a given time.
In the case of information, this wave-like property of objects leads to the consideration of the quantum bit, or qubit, where one can have the superposition of the standard values assumed by a bit, 0 and 1. While in the classical realm the latter would be considered alternative and mutually exclusive options, they can coexist — in the sense of superposition — in the quantum case. This is at the basis of the computational power of future quantum computers.
A coherent superposition is like a controlled combination of ingredients.
In a more realistic setting, and taking into account issues like ignorance(!), the (unwanted) interactions with an environment, and all kinds of “noise”, the state of an object is associated not with a wave, but rather with a so-called density matrix. The latter can be thought of as the incoherent combination of several waves, leading to the decrease and potentially disappearance of interference. One could compare coherent and incoherent mixing respectively to, on one hand, expert cooking, where many flavours combine nicely, either reinforcing or contrasting each other, and, on the other hand, blending everything in a mixer, making often a tasteless combination out of even the most delicious ingredients.
An incoherent mixture of foods may lead to a tasteless result; so can the incoherent mixture of waves and quantum states states.
An incoherent mixture of foods may lead to a tasteless result; so can the incoherent mixture of waves, or of quantum states. (Photo: Tim Patterson (CC BY-SA 2.0))
In the density matrix formalism, (the surviving) coherence is often equated with the presence of off-diagonal elements in the matrix representation of a quantum state. Such off-diagonal elements are the “fingerprint” of the quantum superposition of the (classically) mutually exclusive properties associated with the basis in which the matrix is written; the latter, although in principle arbitrary, is typically singled-out by the physics, for example by the consideration of what are the various possible energy states of the system. Most importantly, interesting effects — like oscillations — can occur when, and only when, there are off-diagonal elements in the energy representation of a quantum state.
Somewhat surprisingly, the purposeful and focused study of coherence in the matrix formalism has been initiated only recently, leading to an explosion of interest and of works on the topic. Researchers are trying to develop a full consistent theory of coherence, which can be considered like a resource to be characterized, quantified, and manipulated.
In [C. Napoli et al., Phys. Rev. Lett. 116, 150502 (2016)], together with collaborators from the University of Nottingham in UK and the Mount Allison University in New Brunswick, Canada, I put forward a quantifier of coherence, the robustness of coherence, that has many appealing properties, including the possibility of efficiently calculating it when the density matrix is known, of directly measuring it in the lab, and of associating it with practical tasks. Indeed, we find that the robustness of coherence of a quantum state sets an ultimate limit for usefulness of the involved physical system for metrological tasks.
We prove that the robustness of coherence and the robustness of asymmetry quantify the usefulness of the corresponding quantum state for the sake of metrological tasks, like establishing which particular rotation among a set of possible rotations was actually applied.
In the companion paper [M. Piani et al., Phys. Rev. A 93, 042107] we expand on these ideas, using the fact that coherence, despite being such a fundamental concept, can also be seen as “just” a special case of “asymmetry”, a word that may also mean different things to different people. Nonetheless, in this case, it is easy to grasp that the asymmetry of an object is associated with how different it looks when, let us say, we rotate it or flip it. It should be clear that a sphere is a very symmetric object; for example, it looks the same from whatever direction we look at it, e.g., even if we look at it while standing on our hands, rather than on our feet. On the other hand, say, a face, albeit typically symmetric with respect to a left-right flip, is not symmetric with respect to an upside-down flip. This means that we can realize that we are standing on our hands by noticing that the faces of the bystanders around us are upside-down themselves — this even disregarding the puzzlement or amusement that could transpire from the same faces.
In [M. Piani et al., Phys. Rev. A 93, 042107] we introduce the robustness of asymmetry as a quantifier of the asymmetry of a quantum state with respect to a set of transformations that form a group; that means, in particular, with respect to a set of transformations such that, if you combine two transformations, one followed by the other, you obtain again a transformation that is part of the group, and such that any transformation can be undone by another transformation in the group. Again, think of rotations of an object, and of how they can be combined and undone. We prove that also the robustness of asymmetry of a quantum state can be easily calculated, that it can be measured directly experimentally, and that it sets an ultimate limit to the usefulness of the system prepared in said state for the sake of telling apart the transformations of the group — another metrological task.
You might still wonder where the name “robustness” comes from. Well, it comes from the fact that the property of interest — coherence, or asymmetry — is quantified by the noise that it takes to destroy it; that is, literally, by how robust it is. What our works point out is that this already operational interpretation of the quantifier is precisely associated with how useful the coherence or asymmetry present in the quantum system are. That is, independently of whether you have a positive attitude (“what is the best use I can make of the resource?”) or you’d rather prepare for the worse (“how much noise can our system tolerate?”), robustness is your answer.
[This post is cross-posted on Quanta Rei]
On an alternative system to evaluate scientific contributions
The ideas below are not necessarily original (for example, I have been inspired by posts and related discussions as this one and its follow-up), and I have never taken any real action to see whether they could be tweaked and somehow implemented. But I am also sure that ideas that are not shared have no hope to change things. And it is better to have at least some little hope 🙂 So, here we go.
A scientist could be associated with two numbers, similar to Google’s PageRank:
– an AuthorRank
– a ReviewerRank.
These two numbers would reflect the reputation (value?) of the researcher in the two major activities/roles of a scientist: that of producing new and interesting results, and that of judging/checking/validating the results of others. These numbers would be calculated also adopting an algorithm similar to PageRank (see below).
Each scientist should have an account with two corresponding modes: Author and Reviewer. The first would be associated with the real name of the scientist, while the second would allow the scientist to act anonymously. Anyone could open an account, but the Reviewer mode would be activated only upon referral from an official institution (university?) or after having built enough AuthorRank. This would reduce the risk of people polluting the system with bad behavior in Reviewer mode, and of accounts opened just to rig the system.
Each “published” (“arXived”?) paper should be open for discussion (commenting, suggestions, etc.) and for voting. Voting would be given by scientists in their Reviewer (anonymous) mode, with only the ReviewerRank displayed and having an effect (although the Author mode would have an effect indirectly; see later). The vote casted by a Reviewer with higher ReviewerRank should count more than the vote casted by a Reviewer with a low ReviewerRank (in this sense the system is PageRank inspired). In principle one could even keep track separately (besides with the total count) of the votes coming from people with high ReviewerRank (much in the similar way in which in Rottentomatoes one can check the rate of the “top critics”).
The AuthorRank would (should?) influence the ReviewerRank by adding to it. The rationale is that if one is a good author, he/she is probably able to judge properly the works of others, even if he/she does not dedicate much time to reviewing and to building the ReviewerRank with an intense reviewing activity.
The researcher would take part in the discussion on his/her article in his/her Author mode. His AuthorRank would increase thanks to the votes given to the article and potentially to the votes given to the activity of the author in the discussion on the author’s paper (e.g., replying effectively to the comments/questions of the Reviewers). The AuthorRank would also increase with citations of his/her paper by other papers. As in the calculation of PageRank, this increase would depend on the AuthorRank of the authors of the citing paper. The point is to make the quality of the citations at least as important as the number of the citations. The ReviewerRank of a Reviewer would increase thanks to the votes of both the Authors and the other Reviewers for constructive feedback, good comments, helpful suggestions.
There could be tags associated to papers to indicate the fields and subfields of research: one could then even end up with Author and Reviewer ranks in each subfield, depending on the votes associated to both the uploads (papers published) and the discussions in a particular field. This would make more objective saying “this person is a leader in this field but also an expert in this other field”.
As a result of this system, a researcher would be associated with his/her Author and Reviewer ranks, possibly (sub)split by field/subfield. Also, each paper in the list of papers would have an associated score. Committees evaluating a candidate for a job should then be able to get a good sense of the ability of the person in a given field/subfield, as well as of his/her contribution to the community through his/her referee activity. |
a7853627b870b6c7 | Saturday, May 5, 2007
George Tenet's Evasions
In October 2002, as Congress was about to vote to grant George W. Bush the authority to invade Iraq, Democratic Senator Carl Levin forced the CIA to declassify intelligence analyses indicating it was unlikely that Saddam Hussein would strike the United States with unconventional weapons or share such arms with anti-American terrorists. But CIA director George Tenet undermined the political punch of this disclosure by stating publicly that the CIA's view was not inconsistent with Bush's claim that Iraq posed an immediate and direct WMD threat. Moreover, Tenet's CIA at the same time revealed there had been "senior level contacts" between Iraq and Al Qaeda. This suggested that Saddam was in league with the 9/11 murderers. Unmentioned was that CIA analysts had concluded there was no evidence of significant ties between Baghdad and Osama bin Laden. With this misleading disclosure, Tenet helped Bush grease the way to war.
As Tenet recounts this episode in his new book, At the Center of the Storm, he concedes he was wrong to have bent to a request from National Security Adviser Condoleezza Rice to issue this public comment. But he neglects to mention release of the intelligence that appeared to link Saddam to Al Qaeda. Here's the Tenet formula in a nutshell: Accept some blame while blaming others and sidestepping inconvenient matters.
Tenet acknowledges that the CIA failed to act on pre-9/11 leads and botched the WMD issue. But he ducks a critical charge: He and his agency disregarded warning signs about the WMD intelligence that was being oversold by the White House. For instance, Tenet downplays questions raised within the agency about the credibility of Curveball--the fabricator who claimed Iraq had mobile bioweapons labs--and ignores previously disclosed e-mails that document this internal debate.
It's hard to tell the unvarnished truth about oneself (and one's agency); it's easier to do so about others. And Tenet does unload. He notes that the Administration never seriously considered options other than full-scale war on Iraq and that it did not ponder the implications of "what would come next." Its prewar WMD rhetoric, he writes, "seemed considerably ahead of the intelligence we had been gathering," and the Administration ignored prewar assessments of troubles that could arise following an invasion. Tenet also shows how calamitous postinvasion decisions were not rigorously discussed before being implemented.
This is a profound indictment, but Tenet never faults Bush. He aims mostly at Vice President Cheney and the neocons. Cheney, Tenet notes, made claims about Iraq's WMDs that were not even supported by the CIA's overstated intelligence. On one occasion Tenet had to kill an over-the-top Cheney speech on the unproven Al Qaeda-Iraq connection. Before and after the invasion, the neocons relentlessly promoted Ahmad Chalabi, a controversial Iraqi exile and schemer, as the solution to whatever the problem of the moment was. Condoleezza Rice was "remote," did little in response to pre-9/11 warnings and failed to broker Iraq policy in an effective manner. Defense Secretary Rumsfeld was out of touch with reality on Iraq. Tenet also slams David Kay, whom Tenet hired to lead the WMD search in postinvasion Iraq. Kay's crime? Not leaving his post quietly but instead telling the press and Congress that there were no WMDs to be found. In all this, Bush, inexplicably, is blameless.
Tenet's book is gripping (thanks to co-author Bill Harlow, a novelist and former CIA spokesman). But it's self-serving and partial. Tenet never holds himself fully accountable for his prime mistake: not paying sufficient attention to prewar questions about both the WMD intelligence and the postinvasion planning and not speaking out about his and the Administration's multiple failures until now (when he is making millions of dollars with his mea-quasi-culpa). All his explaining and blaming cannot cover up an undeniable fact: Tenet was not merely at the center of a disastrous storm; he helped to create it.
capt said...
Mr. David Corn,
"Tenet was not merely at the center of a disastrous storm; he helped to create it."
Good point.
capt said...
Poll: Bush Hits All-Time Low
George W. Bush has the lowest presidential approval rating in a generation, and the leading Dems beat every major '08 Republican. Coincidence
May 5, 2007 - It's hard to say which is worse news for Republicans: that George W. Bush now has the worst approval rating of an American president in a generation, or that he seems to be dragging every '08 Republican presidential candidate down with him. But According to the new NEWSWEEK Poll, the public's approval of Bush has sunk to 28 percent, an all-time low for this president in our poll, and a point lower than Gallup recorded for his father at Bush Sr.'s nadir. The last president to be this unpopular was Jimmy Carter who also scored a 28 percent approval in 1979. This remarkably low rating seems to be casting a dark shadow over the GOP's chances for victory in '08. The NEWSWEEK Poll finds each of the leading Democratic contenders beating the Republican frontrunners in head-to-head matchups.
Perhaps that explains why Republican candidates, participating in their first major debate this week, mentioned Bush's name only once, but Ronald Reagan's 19 times. (The debate was held at Reagan's presidential library.)
When the NEWSWEEK Poll asked more than 1,000 adults on Wednesday and Thursday night (before and during the GOP debate) which president showed the greatest political courage - meaning being brave enough to make the right decisions for the country, even if it jeopardized his popularity - more respondents volunteered Ronald Reagan and Bill Clinton (18 percent each) than any other president. Fourteen percent of adults named John F. Kennedy and 10 percent said Abraham Lincoln. Only four percent mentioned George W. Bush. (Then again, only five percent volunteered Franklin Roosevelt and only three percent said George Washington.)
A majority of Americans believe Bush is not politically courageous: 55 percent vs. 40 percent. And nearly two out of three Americans (62 percent) believe his recent actions in Iraq show he is "stubborn and unwilling to admit his mistakes," compared to 30 percent who say Bush's actions demonstrate that he is "willing to take political risks to do what's right."
Here is a sincere question:
Does anybody think it is a good idea to have one party rule? As in the president and both houses of congress owning a majority be it D or R?
David B. Benson said...
I don't know why Jimmy Carter was so unpopular. I thought he did a decent job.
I do know why George XLIII is so unpopular...
capt said...
'Globe' Asks, With Photos: What Else Would Money Spent on Iraq Buy?
NEW YORK The folks at The Boston Globe's got to wondering the other day: With the price tag for the Iraq soaring to at least $456 billion, what else could that money have been spent on?
But rather than just rattle out some numbers on ending malnutrition around the globe, they put together a photo gallery today to illustrate the idea. Some of what they pictured:
-- "Tagged as the most expensive high school in Massachusetts, at $154.6 million, Newton North High School could be replicated almost 3,000 times using the money spent on the war."
-- "At published rates for next year, $456 billion translates into 14.5 million free rides for a year at Harvard; 44 million at UMass."
-- "With just one-sixth of the U.S. money targeted for the Iraq war, you could convert all cars in America to run on ethanol. estimates that converting the 136,568,083 registered cars in the United States to ethanol (conversion kits at $500) would cost $68.2 billion."
-- "The Red Sox and Daisuke Matsuzaka agreed on a six-year, $52 million contract. The war cost could be enough to have Dice-K mania for another 52,615 years at this year's rate."
The final photo slot asked for readers' suggestions. Among the ones submitted:
-- Annex Venezuela
--Pay off everyone's student loans
-- "Crash program to install new nanosolar technologies (not the old-style panels..these are coatings) on every outdoor roof surface and sell back excess electricity back to the grid."
I think the obvious purchase is: Give the $$ we spend in Iraq to big oil to keep gasoline and heating fuel under $3 a gallon.
Anonymous said...
Does Pandemoniac post here?
Ivory Bill Woodpecker said...
Jimmy Carter made his biggest mistake when he chose to run in 1976. ANYONE who had been inaugurated in January 1977 would have had a rough ride. Too many bills, literal and otherwise, were coming due for our country. I fear that thanks to the Elephascists, an even bigger set of bills will be coming due for the Chimperor's successor(s).
WTF said...
The Myth that ronnie raygun was anything but a budget buster has been perpetuated by the idiot repugs, and why anyone believes this crap to this day is beyond me. Lying politicians are all over and on both sides. The big question is why do we have so many politicians and so few leaders? What is needed these days is some leadership not the constant flogging of a mediocre choice. The last presidential election gave us a choice between two skull and bone members, which only have about 600 in the world. WTF? Is this any way to choose our candidates? Plus how are these guys chosen anyway? By their bank accounts, how about putting a regular person in there, not someone bought and paid for by the corporate masters of this country. But, all of this bitching and moaning about the situation won't change anything, it has gone too far to ever come back in an orderly fashion this country and yes, the world is going to have to suffer some really bad times in the future. The current situation in various parts of the world can and will get worse, plus the US is going to suffer some really bad times itself. Prepare and be informed enough to try and protect yourselves, for what good that will do. Nope the game is over, we just haven't left the stadium yet.
capt said...
We're Number Two: Canada Has as Good or Better Health Care than the U.S.
Despite spending half what the U.S. does on health care, Canada doesn't appear to be any worse at looking after the health of its citizens
The relative merits of the U.S. versus Canadian health care systems are often cast in terms of anecdotes: whether it is American senior citizens driving into Canada in order to buy cheap prescription drugs or Canadians coming to the U.S. for surgery in order to avoid long wait times. Both systems are beset by ballooning costs and, especially with a presidential election on the horizon, calls for reform, but a recent study could put ammunition in the hands of people who believe it is time the U.S. ceased to be the only developed nation without universal health coverage.
Set aside the quality issue and just consider the simple fact that there is not 45 million uninsured in Canada.
Our system is making millionaires out of the educated while millions are forced out of a failed system.
capt said...
Beating global warming need not cost the earth: U.N.
It really doesn't matter the cost of saving ourselves from ourselves, the real cost is the result of doing nothing.
capt said...
"The big question is why do we have so many politicians and so few leaders?"
The real leaders can't make a competitive war-chest to buy a seat at the table. No honest person can rally millions from the people so the corporate interest has all the pull. We "the people" are just strung along with lies and failed efforts to challenge the status quo. That sucks.
capt said...
Rocky Anderson Obliterates Sean Hannity at University of Utah Debate on Impeachment
Last night's debate between Salt Lake City Mayor Rocky Anderson and FOX News propagandist and right-wing hack Sean Hannity was amazing. Anderson bombarded Hannity with a devastating indictment of all the President's lies and the senseless continuing occupation of Iraq, and all Sean can do was hide behind the troops and hurl juvenile insults, without ever once addressing the points Anderson raises. Here are the closing remarks.
Download (WMV)
Download (MOV)
If you want to check out the entire debate (and I highly recommend you do), you can view the Google Video here or you can stream it from the website (note: it only works with Internet Explorer). Rocky's opening goes 30 minutes, followed by 30 minutes from Sean, then there's a 15 minute period where they question eachother, followed by a 20 minute Q&A from the audience, then a 2 minute closing.
If there's one thing I took away from this (besides that Sean is a chump who can barely defend himself), it's that Hannity is NOTHING without his FOX News bully pulpit.
Maybe we can get Corn to debate Rocky? I think the "no impeach" position is weak and indefensible.
Gerald said...
Welcome to the Weekly Sunday Section by Gerald
Personal stress must be reduced to control the heart arrhythmia that increases the heartbeat to a dangerous level and placing my life in grave jeopardy. For this reason I need to reduce my postings and I need to develop a carefree attitude about Nazi America and her imminent demise.
My Weekly Sunday Section by Gerald will begin with certain blogs, blogspots, and websites. Plus, I may try to add some additional information on justice and peace.
The New Testament – John 31:34-35
Gerald said...
How Great Thou Art
Gerald said...
On Eagles Wings
Gerald said...
Praying Each Day
Gerald said...
Make It New
Gerald said...
Seven Beatitudes to Change the World
Gerald said...
Pax Christi USA
Gerald said...
One Day You're Gonna Wake Up
This article concludes Weekly Sunday Section by Gerald. God willing, I hope to be back next Sunday and share some ideas and thoughts with you.
Robert S said...
I just don't see it - Capt.
In my limited understanding of the subject, the question of the wave/particle aspects of electro-magnetic radiated energy (i.e., light) relates more to the examined measurement than to the light itself.
Excerpted from:
Quantum uncertainty
by Peter Landshoff
Bohr's version of quantum mechanics contained the first hint that electrons, although they are particles, are wave-like. This was made explicit in 1926 by the Austrian physicist Erwin Schrödinger, whose equation we still use today as the starting point of most calculations.
At the same time, Werner Heisenberg in Germany invented a formulation of quantum mechanics that seemed to be very different from Schrödinger's: it involved matrices rather than waves. Soon after, Paul Dirac, a Nobel prizewinner and occupier of the same professorship once held by Newton in Cambridge, showed that Heisenberg's theory could be cast into Schrödinger's form by a clever mathematical transformation. So people began to believe that they understood the mathematical structure of the theory, but its peculiar consequences continue to puzzle and fascinate us to this day.
Electrons as waves
Light has a dual nature: sometimes it seems particle-like and sometimes wave-like. But it turns out that this is also true of electrons and all other particles. If a beam of electrons is passed through a crystal, it is diffracted, a phenomenon usually associated with the wave-like behaviour of light.
When a fluorescent screen is placed behind the crystal, a diffraction pattern appears on the screen. The regularly spaced atoms in the crystal cause the diffraction. The pattern can be explained by associating with the electrons a wave of wavelength lambda, which changes with the momentum p of the electrons according to a relation discovered by the French physicist Louis de Broglie.
lambda = h/p
This is just the same relation as applies to photons, the "particles" of light. Indeed, quantum mechanics associates a wave with any type of particle, and the de Broglie relation is universal.
Figure 3: Electron microscope picture of a fly. The resolving power of an optical lens depends on the wavelength of the light used. An electron-microscope exploits the wave-like properties of particles to reveal details that would be impossible to see with visible light.
Source: LEO Electron Microscopy Ltd, Image of a fly.
Light waves or electromagnetic waves are streams of photons. The number of them is proportional to the intensity of the light.
Figure 4: In a diffraction experiment light is shone through a pair of slits onto a screen giving an interference pattern.
Figure 4: In a diffraction experiment light is shone through a pair of slits onto a screen giving an interference pattern. (See "Light's identity crisis" elsewhere in this issue).
It is possible to make the intensity so low that during a diffraction experiment only one photon arrives at the slits and passes through to the screen. (Similarly, we may allow just one electron to pass through the crystal.)
In these cases we cannot calculate with certainty the angle theta through which the particle is diffracted. However, if the experiment is repeated many times, we find a probability distribution for theta that has the same shape as the variation of intensity with theta in an experiment where there is a continuous beam of particles.
The Schrödinger equation
This suggests that the association of a quantum-mechanical wave with a photon, or with any other kind of particle, is somehow statistical. According to quantum theory one can never predict with certainty what the result of a particular experiment will be: the best that can be done is to calculate the probability that the experiment will have a given result, or one can calculate the average result if the experiment is repeated many times.
While in the case of photons the waves have a direct physical interpretation as electromagnetic field oscillations, for other particles they are much more abstract - purely mathematical constructs which are used to calculate probability distributions.
The "wave function" that describes an electron, say, varies with position r and time t and is usually written as follows:
It satisfies the differential equation which was first written down by Schrödinger. He could not prove that his equation is correct, though he was led to it through a plausibility argument from the various known facts about the wave nature of matter. The "proof" of the equation lies in its having been applied, successfully, to a very large number of physical problems.
It turns out that Psi has to have two components in order to describe physics successfully. It is complex-valued; the two components are its real and imaginary parts.
When Schrödinger's equation is solved for an electron in orbit round an atomic nucleus, it correctly leads to discrete energy levels. It is possible to do this calculation without understanding what the physical meaning of the wave function Psi is. Indeed, it was only some time later that Bohr suggested the correct physical interpretation: if a measurement is made of the position of the electron at time t, the probability that it is found to be within an infinitesimal neighbourhood of r, which mathematicians write as d3r, is:
|Psi(r,t)|^2 d^3r
This is the best information that quantum mechanics can give: if the measurement is repeated many times, a different result is obtained each time, and the only thing that can be predicted is the probability distribution. This basic indeterminacy has fascinated philosophers over the years, but most physicists have got used to it.
Robert S said...
Colbert: GOP Rep. believes drug slang is 'foreign language'
David Edwards and Nick Juliano
Published: Friday May 4, 2007
Stephen Colbert's "434-part" Congressional profile "Better Know a District" stopped just outside the nation's capital yesterday for an interview with Rep. Tom Davis, who represents northern Virginia.
Davis' district happens to be the home of Doobie Brothers drummer John "Tiny" Hartman, which prompted the following from Colbert:
"Are you familiar with what a doobie is," Colbert asked. "It's the same thing as a dutchie, ganja, spleef, chronic."
"That's a foreign language to me," Davis cut in. "We have 120 foreign languages in Fairfax County spoken in our schools."
"Oh, they're speaking this in your schools," Colbert quipped. "I guarantee."
The following video is from Comedy Central's Colbert Report.
I thought they were called "roofers." - David Frye parodying Richard Nixon
Which only goes to show that despite the quantum uncertainty inherit in universe, there are some properties which appear to remain constants...
capt said...
"I just don't see it - Capt"
I was trying to be funny.
No small wonder I am not a comedian.
Invisibility - I just don't see it - sounded funny at the time.
Robert S said...
It was/is funny...
My comment was targeted at the original article, which seemed to foster the idea that light was either wave or particle, rather than the notion that it displays aspects of both, dependent on human observation.
My comedic attempt - the cosmological constant of idiotic political speech, especially as concerns drug usage - apparently fared no better.
capt said...
"apparently fared no better"
Or I double reversed your reverse!
Tee hee!
capt said...
US is defeated in Iraq: Zawahiri
"The ones who have stirred up strife in Iraq are those who today are begging the Americans not to leave," said the white-turbaned Zawahiri, sitting next to bookshelves and an assault rifle.
Zawahiri mocked Bush for saying that a US-backed security plan for Baghdad was showing signs of success.
"The success is only for his pocket and Halliburton," he said, referring to the company once headed by Vice-President Dick Cheney.
Zawahiri also called on African-American soldiers to refuse to fight in Iraq and Afghanistan, saying America had changed only the "appearance of the shackles and chains" of their slave forefathers.
Zawahiri repeatedly praised Black Muslim leader Malcolm X on the video, which included footage of the American militant's speeches, interspersed with documentary scenes of police action against blacks in the 1960s and poor blacks in urban ghettos.
Zawahiri's last public comments were on March 11, when he criticised the leadership of the Palestinian Islamist group Hamas over its Saudi-brokered deal with the US-backed Palestinian faction Fatah.
capt said...
The Strangest Little Things in Nature
When small cannot get any smaller, you enter the quantum world of quarks, photons, and space-time foam. You're welcome to take a look at this indivisible side of nature, but just remember to leave your common sense at the door.
People as far back as the Greek philosopher Democritus believed that things were built up from irreducible pieces. Isaac Newton himself thought that light was not a wave, but rather a collection of tiny "corpuscules." Physicists have only recently acquired tools with sufficient resolution to see nature's inherent graininess.
Here's a quick tour of the quantum underbelly of the things around us.
Robert S said...
P. 15 -- In reviewing the troubling times the CIA went through in the 1990s, Tenet writes, "The Agency had also been rocked by false allegations in 1996 that some of its members had been complicit in selling crack cocaine to children in California. The allegations were ludicrous." This allegations was indeed unfounded. But Tenet fails to mention that the CIA's inspector general subsequently released two reports disclosing that the agency had worked with suspected drug traffickers when it supported the contras in Nicaragua in the 1980s. - David Corn, Tenent Abbreviated
An interesting editorial mistake. Did Mr. Corn mean to say "these allegations" or "this allegation"?
This is not, in context, a trivial distinction.
The CIA drug connection is so multifaceted as to be looking through a prism at an apparition.
The radio network which took as its name Air America knew well the tongue-in-cheek implications of its moniker.
Eugene Hasenfus was a real live person who cannot be wished away.
Gary Webb’s “Dark Alliance” Returns to the Internet
By Dan Feder
Special to The Narco News Bulletin
June 23, 2005
Gary Webb later committed suicide. Mr. Corn does no service to his memory by his glib statement. What exactly does Mr. Corn believe the CIA - Drug connection to be?
See also, Robert Parry, Alfred McCoy, MK ULTRA, etc., etc.
capt said...
In all fairness I read Gary Webb back then and I did not believe the CIA connection until the DCI (dep. DCI) admitted as much.
Then I was FLOORED.
capt said...
Rush Limbaugh Mixes Science with Comedy
Rush Limbaugh, the conservative talks show host, proves that science can be twisted to support any viewpoint. He found LiveScience’s Top 10 Ways to Destroy Earth and, after reading much of the presentation (hurl our planet into a black hole, blow it up with antimatter, and other pretty difficult schemes), rightly concludes that it’s virtually impossible for us to annihilate this world. He goes on to say that this is reason enough to go ahead driving your SUV and running your air conditioner, because you can’t destroy the planet by your actions.
That’s funny. And one assumes Limbaugh knows it is just humor. The Top 10 Ways to Destroy Earth, by the creative ponderer Sam Hughes, lays out incredibly difficult but theoretically plausible ways to render Earth entirely gone, as in no longer here. Dust, vapor, food for a blackhole.
People who worry about global warming and the effect humans have on climate are, however, not arguing that we will obliterate the planet, but rather simply that we are contributing to a dangerous trend that will cause seas to rise and swamp coastal communities, might render many species of animals extinct, and that could generate a host of other ill effects for society and life as we know it.
Limbaugh can make you laugh, but encouraging gas guzzling just because it won’t literally destroy the planet is a sad recommendation even from someone who doesn’t worry about global warming. Should we not also be concerned about American dependence on foreign oil, the limited supply of oil, and our eroding ability to compete effectively in the global marketplace as the cost of oil skyrockets while we fail to robustly encourage investment in new technologies and sustainable energy sources? Those seem like reasonable concerns for a conservative, but perhaps I’ve just twisted the science to fit my views.
Well done. If "Global Warming" is just a way of branding conservation and concern for the overall health of the planet I can't see how that is a bad thing in any way. Maybe some of the SUV drivers can learn there is a cost associated with using too much oil even if they have the money to run at 3 miles per gallon. Maybe people can learn there are costs above and beyond their electric bill for running even one light bulb. (no matter what kind)
Robert S said...
I would rather see a political landscape which included a real economic alternative to T.I.N.A. which is denied by the Democratic v. Republican duopoly. That said, under the present circumstances, a mixed government is better than a GOPher monopoly, a Democratic monopoly would be better still, and best would be where the Dems were seriously challenged from the left.
IMHO, of course. I am known to reject property rights uber alles.
capt said...
"Democratic monopoly would be better still"
THAT is where we part ways. A truly progressive or dare I say truly liberal monopoly would be better still. Neither of which is actually represented by the Democratic party, not anymore.
IMHO, but I tend to hate all parties and in large measure.
Robert S said...
It doesn't strike me as much of a parting of the ways; inasmuch as I agree that the Dems have largely turned there backs on their natural constituency and embraced the monied interests. They have only done so somewhat less enthusiastically than the GOPhers.
So, my position here is that the Rethuglicans are at least honest enough to say outright that their constituency is the "haves and have mores," and that their position is let the rich keep more and they will piss enough "trickle down" upon the rest us to keep us if not happy, then quiet, and if not quiet, then at least subdued or incarcerated. There really isn't much of an alternative within that group to justify them as being a viable choice, simply as an opposition to the Democratic Party.
The Democratic Party, at least in the wake of the New Deal, was an effort to make Capitalism palatable to the suffering masses; in the dust bowl, in the mines, in the factories, on the picket lines...
It was, after all, the Socialists who first proposed Social Security, unemployment insurance, minimum wages, etc.
It isn't that I want monopoly power in one party, it is rather I'd like to see the the spectrum shift from Center-Right to Fascist, to Socialist to Center-Right...
Robert S said...
The evidence is in! Government service leads to memory loss.
In a related survey, few of the government officers/employees who have complained of memory loss remember any consumption of cannabis products...
capt said...
I think - selective memory loss. They seem to forget every crime and potential criminal act but they cannot forget a single small success or popular position.
I think there is something in the air in DC.
Robert S said...
Iraq situation may worsen.
Something in the air in D.C.?
Is it a bird? Is it a plane? Is it Phase II of the Senate Intel Committee?
Bombs Kill 8 U.S. Troops in Iraq
Associated Press Writer
Is it the smell of death and destruction? Or the first whiff of Impeachment?
capt said...
How far you can go without destroying from within what you are trying to defend from without?: Dwight D. Eisenhower
Listen to Rev King in this historic anti-war speech:
Thanks ICH Newsletter!
capt said...
Surging Into Slaughter: The Bipartisan Death Grip on Iraq
Intro: It is becoming increasingly clear that regardless of who wins the election in 2008, the United States government is not going to withdraw from Iraq. It is just not going to happen. This is the awful, gut-wrenching, frightening truth we must face. The only way that American forces will ever leave Iraq is the same way they left Vietnam: at gunpoint, forced into a precipitous and catastrophic retreat. And how many thousands upon thousands of needless deaths we will see before that terrible denouement?
While Congressional leaders and George W. Bush start "negotiations" on ways to prolong the war crime in Iraq for another year or two (at least), on the ground in Baghdad, the situation is worsening by the day
capt said...
When will American people be told the truth about Iraq?
Now that President Bush and the Democrats have taken turns grandstanding over his veto of their troop withdrawal bill, it's time for a bipartisan burst of honesty.
Instead of haggling for political advantage, Bush and members of Congress should both confess that they have not been straight about the future in Iraq.
The president's promise to "complete the mission" is a triumph of a tired slogan over reality, just as the Dems' pledge to "end the war" is riddled with loopholes. It's time to cut the bull and be realistic about where we're going.
Start with Bush. While he blasted Dems again last Tuesday for demanding the start of troop withdrawal by Oct. 1 as a recipe for chaos, he has quietly accepted a de-facto deadline set by his own commander that is not much different.
Gen. David Petraeus said last week that he would decide in September whether the surge of added troops was working. Implicit in the commitment, which includes a public report to Congress, is that a lack of progress would doom the plan.
While it's not clear what Plan B is, it is certain the surge must pay dividends to continue past the fall.
Robert S said...
Marine major, Owen West, who has served two tours in Iraq, predicted that the 75,000 would be in Iraq at least until the fall of 2008.
That is when Americans will elect our next president. Surely by then, somebody will be forced to tell us the truth about Iraq.
Surely, by then. Right. Just as we have been given the truth about:
Oh, hell, chose your scandal...
Ivory Bill Woodpecker said...
Before anyone starts griping about the lack of Democratic courage, I want to remind everyone of what happened to the Democrats the last time they DID go out on a limb.
Remember the Civil Rights Acts that ended apartheid in the USA? The Democrats, and some honorable Republicans [yes, once there were honorable Republicans] did the right thing instead of the politically expedient thing.
After that, the DIShonorable faction of the GOP--which today IS the GOP--promptly turned to exploiting the racial bigotry of white Americans to become the dominant party for, oh, some 40 years now.
So even if the Democrats actually do have the power to cut the Chimperor and Darth Cheney off at the knees and bring the troops home, I don't blame them for lacking the courage to do so.
The Democrats know that the GOP Noise Machine would promptly go into "stab-in-the-back" mode. Hell, they're doing that already, since they know the war is lost. The Dems also know, from years of bitter experience, that my fellow white Americans are stupid enough and morally depraved enough to embrace such scapegoating eagerly.
The voting patterns of white Americans since the Civil Rights acts are a grim monument to Mencken's dictum: "Democracy is the theory that the common people know what they want, and deserve to get it, good and hard."
Since my fellow palefaces' Pwecious Widdle Egos won't let them admit that they F**KED UP SUPREMELY, they will readily embrace the GOP lies yet again.
My own people sicken me. It shames me to admit I belong to them.
So I don't blame the Democrats. I blame my fellow white Americans. In the secrecy of the voting booth, they continue to bend the knee to the idol of racism, as they have done for decades. They deserve the government that they have now. Selah.
From the Arkanshire, IBW
capt said...
Boehner acknowledges GOP nervous on Iraq
Another Democratic presidential candidate, former Sen.
"I think that America has asked the Democratic leadership in the Congress to stand firm, and that's exactly what I'm saying they should do," he said.
"With all due respect, we could have used John's vote here in the Senate on these issues here," Dodd said.
Dodd and Boehner appeared on "Fox News Sunday," while Edwards was on "This Week" on ABC. Rangel spoke on "Face the Nation" on CBS while Lugar and Schumer were on "Late Edition" on CNN.
Robert S said...
Toby Keith: Working Class Hero, or Rich Asshole?
by Jaime O'Neill | May 7 2007 - 9:15am
Robert S said...
The lethal media silence on Kent State's smoking guns
by Bob Fitrakis & Harvey Wasserman
It is difficult to overstate the political and cultural impact of the killing of the four Kent State students and wounding of nine more on May 4, 1970. The nation's campuses were on fire over Richard Nixon's illegal invasion of Cambodia. Scores of universities were ripped apart by mass demonstrations and student strikes. The ROTC building at Kent burned down. The vast majority of American college campuses were closed in the aftermath, either by student strikes or official edicts.
Can't be havin' none of those uppity protesters gettin' outta hand...
Free speech zones? Remember the
Florida FTAA protests?
How about the recent MacArthur Park May Day incident?
And speaking of Douglas MacArthur, he led such heroes as Patton and Eisenhower in attacks on US WWI veterans in Washington D.C. in 1934 - the Bonus Army incident.
How can you run when you know?
Robert S said...
1932, not 1934, sorry.
capt said...
"President Bush has ordered White House staff to attend mandatory briefings beginning next week on ethical behavior and the handling of classified material after the indictment last week of a senior administration official in the CIA leak probe. … A senior aide said Bush decided to mandate the ethics course during private meetings last weekend with Chief of Staff Andrew H. Card Jr. and counsel Harriet Miers. Miers's office will conduct the ethics briefings."
- "Bush Orders Staff to Attend Ethics Briefings: White House Counsel to Give 'Refresher' Course," Washington Post, Nov. 5, 2005
Was Miers training Rove how TO destroy emails? The "mandatory briefings" should preclude the "I didn't know better" excuse or am I missing something? Why does the MSM just forget these things?
Robert S said...
Why does the MSM just forget these things? - Capt.
It is awfully convenient for the corporate masters that control the marionettes for them to do so...
Robert S said...
GOP Convention Papers Ordered Opened
By Larry Neumeister The Associated Press
Friday 04 May 2007
New York - The city cannot prevent the public from seeing documents describing intelligence that police gathered to help them create policies for arrests at the 2004 Republican National Convention, a judge said Friday.
U.S. Magistrate Judge James C. Francis IV made the ruling regarding documents about information the New York Police Department says it used.
The city had contended that the documents should remain confidential, saying opening them would jeopardize the city's rights to a fair trial. Lawsuits allege that the city violated constitutional rights when it arrested more than 1,800 people at the convention.
The judge stayed his ruling for 10 days. Peter Farrell, a city lawyer, said the city is considering an appeal.
"The decision is a vindication for the public's right to know and a total rebuff of the Police Department's effort to hide behind the cloak of secrecy when it comes to its surveillance activities," said Donna Lieberman, executive director of the New York Civil Liberties Union, which sued on behalf of some of those arrested.
The convention was policed by as many as 10,000 officers from the 36,500-member department, the nation's largest. They were assigned to protect the city from terrorism threats and to cope with tens of thousands of demonstrators.
The NYPD were using preemptive arrest tactics, and holding people, illegally, in squalid conditions in an abandoned warehouse on the Hudson River, for far longer than would have been necessary if they had been processing them in an orderly fashion.
The POLICE are the property militia of the wealthy. And historically, the goons of the bosses.
capt said...
New Thread |
6e2374881d472082 | Thursday, August 31, 2006
On Irish Election
So from tomorrow if you have an illegally held firearm you can hand it into your local Garda station and not get arrested. This is as part of the governments effort to reduce crime. But in fairness it is a waste of time.
In Ottawa Canada they ran quiet a successful one where 506 firearms were handed in. So it sounds like a good idea but one of the key quotes from the police chief of Ottawa Vince Bevan. Was that “Our intent was to increase community safety, by reducing the potential of unwanted firearms getting into the hands of those who may use them to carry out criminal acts,” See that is the difference with a gun amnesty in Canada and one in Ireland. In Canada guns are quiet common in Ireland they are all ready in the hands of those who may use them to carry out criminal acts. A gun amnesty is going to do little to curb the rise of gun crime.
Continue reading ‘McDowell you used to be Cool.’
Update:McDowell you are still cool
TV Hell
A poll compiled by Informa Telecoms and Media looking at the viewing habits of 20 countries has found that CSI:Miami is the most popular show in the world. We can confidently expect the seas to start boiling and for it to rain blood anytime soon. The top 10 is as follows: 1. CSI Miame 2. Lost 3. Desperate Housewives 4. Te Voy a Ensenar a Querer (I will teach you to love) 5. The Simpsons 6. CSI: Crime Scene Investigation 7. Without a Trace 8. Innocente de Ti (Innocent of you) 9. Anita, No Te Rajes! (Anita Don't Give Up) 10. The Adventures of Jimmy Neutron: Boy Genius
Wednesday, August 30, 2006
Morning News
I love the 10 commandments thing.
I am going to start a book.
I have had this book sitting in the corner of the room for the last year and have yet to touch. I bought it thinking it might be a good read but when I got it and its 1090 pages. I decided it will stay in the corner. But I have decided to pull it out and start it. It is by Roger Penrose and is called the Road to Reality A complete guide to the Universe. And has such delightful sections as 26.3 infinite-dimensional algebras and 17.2 Spacetime for Galilean relativity. Wish me luck.
The Life
Nice pic by one of my most read blogs Branedy . That is what I like about Ireland. Sitting outside of a pub watching the world go by. If only Kinsale was not full of Cork people. :)
Don't Go
Paige is threatening to quit as
"I hope that you will excuse my discontinuing as a blogger. I genuinely donÂ’t feel worthy of the description owing to my lack of original talent and the selfish (childish) reason that I started blogging."
Which is funny as the post she wrote to say it. Is very good and original. She explains why she started blogging which is a lot more interesting then my story of a misaligned Chinesee laser cavity. Which I know sounds like a cool story but isn't trust me. It was misaligned so I couldn't do anything so I had time to doss. Hence the name Dossing Times. Anyway Paige keep up the work.
More Emails from Republicans
Got this in the email.
Dear NRCC supporter,
Act now!
As Executive Director of the National Republican Congressional Committee, I have not personally seen a more grave and dire situation for Republicans. That is why I need your immediate attention.
There are very few “safe” seats in Congress this election year. The Democratic Congressional Campaign Committee has mobilized itself and is providing unprecedented support to candidates. The Democrats want to win the majority and they will do everything to achieve their goal.
A few weeks ago, our political director, Mike McElwain, informed you that the Democratic Congressional Campaign Committee had made a $50 million dollar media buy. We knew the Democrats were spending huge amounts of money to promote their ultra-liberal candidates and destroy the reputations of Republicans with false information and lies, but this is worse than I expected. The media buy combined with the support from liberal 527’s makes for a destructive Democratic force.
That is why I need your immediate help.
We must match the Democrats’ media buy and get Republicans on TV. We must increase our grassroots support. And we must be proactive in helping our targeted candidates communicate with the American public, particularly because the liberal media is drowning out the conservative message.
We cannot wait another day to act.
There are ten short weeks until Election Day and the Democrats have surpassed us in countless ways. As Republicans, we know how to mobilize volunteers and voters and how to help candidates with whatever they might need. We now must act on our knowledge, but we can only be effective with your help.
Please go to this secure link and make your most generous contribution of $35, $50 or even $200, by September 1st. We have to start September strong and put your generous contribution to work right now for Republican candidates.
There is no time to wait. We must show the Democrats that our party is stronger and more united than ever in the battle for Congress. Please make your most generous contributions at this link by the end of the week. I appreciate your help and quick action.
It is gas really "beware the ultra-liberals" like they are the bogey man. I am really begining to think that the war on terror is not a war on terror but a war on Democrats. Which is a pity as the war on terror is in principle not a bad thing. If there are any Americans reading this. Vote for a third party canidate.
Tuesday, August 29, 2006
They just don't make them like this anymore.
Classic Animaniacs. The Anvilania one. Enjoy.
. Those were the good old days.
Monday, August 28, 2006
Sunday, August 27, 2006
Nasrallah honesty
From Naharnet.
Hizbullah leader Sayyed Hassan Nasrallah said Sunday that if he had known the capture of two Israeli soldiers would lead to such a war, he wouldn't have ordered it.
*cough* Bullshit *cough*
Hat tip (Macdara)
Thin Women
Here is an excellent article in the Observer about the new trend of uber thinness on women and why
The shape we're in Why women are to blame for our obsession with being thin.
It really is a bit perplexing for us men because as the article says
It's nothing to do with men (heaven knows, few men actually fancy the perilously thin females glorified by women; most would swap five Posh Spices for a Jennifer Lopez any day), and everything to do with competition between females.
Humbling of the supertroops shatters Israeli army morale
Find a link here for an article in The Sunday Times taking an in-depth look at the tactics used in the conflict between Hizbollah and Israel and the unease created by Israel's logistical and intel failures.
Recent posts of mine on IrishElection
Oh Dear Scare-tactics media again
From the Sunday Times
IRISH people are more likely to be victims of crime than any other country in the EU — according to a new European commission study.
Now that is an indictment of the government. We really have the worst crime rate in the EU. Oh wait
the report, which is based on citizens’ experience and perceptions rather than government or police figures,
So basically we don’t have the worst crime we just think we do. So who’s fault is that then? Perhaps the media for always saying crime is high.
Continue reading ‘Oh Dear scare-tactics media again.’
Educational Disadvantage.
Looking through the local papers for the local paper section I was struck by the amount of references to the cost of going back to school. Education is to me the most important thing for a country to work on and should be the most important priority for any party. So when I see parties more interested in giving corporate welfare to Boeing then insuring that every kid gets a decent education I am not a happy bunny.
Continue reading ‘Educational Disadvantage’
Migrant Workers
The Irish Independent reports on the Government’s plans to introduce work permits for the 2 new accession states Romania and Bulgaria.
Saturday, August 26, 2006
For any of you who don't read check out the cartoons . Thanks to the bitsniff guys.
Friday, August 25, 2006
Eating Out.
Richard talks today about eating out. Eating out is one thing I do not like to do. I just hate the entire restaurant setting. With the ambience painted on the wall. The mock sense of intimacy with your conversation being over heard by all on sundry. Which makes you feel that you are a guest in the place where the conversation will only veer in the direction of quality of the broccoli and "ooh I like that light fixture" .
As for the etiquet thing it is all shite people trying to create utopian ideal of civility a neo-Austen world of tea and manners. A total charade for people to pretend that they are civil when they are not. Civility comes from within not by using the correct fork or knowing the correct wine glass to use with red.
But the worse aspect for me has to be being served. I feel incredibly uncomfortable being served by someone, as if I am better then them deserving service. I am just like them yet somehow I have been elevated to a level above them (for a right winger I am awfully communist). This just makes me incredible put off and I can never enjoy the meal.
Also I enjoy cooking and like seeing my friends enjoy my various concoctions. And to be honest I get an ego trip when I hear comments like “man that’s fucking lovely” or “that is a sexual cheesecake”. It really is the best part of the whole experience why would I pay someone else to do that part?
I rarely ever eat out and when I do it is usually not on my insistence. However there is one eating out experience that I always observe. Curry Cheese chips after a feed of Guinness. Which I enjoy in the fresh air with a plastic fork with friends where i can talk about any topic I want in the ambience of the night.
Thursday, August 24, 2006
Official: Many rhymes now obsolete
Astronomers have decided that Pluto is not in fact a planet, reducing the number of planets in the solar system to 8 and rendering several "sequence of planet" rhymes obsolete. Oh yes, and thousands of science and school books are now out of date.
The scientist have agreed that for an object to be classified a planet it must satisfy the following conditions:
• It must be in orbit around the Sun
• it has cleared its orbit of other objects
Although Pluto satisfies the first two conditions it fails the third as its eliptical orbit overlaps with that of Neptune and hence it hasn't cleared its orbit of other objects. Thus it is now classed as a "dwarf planet"
This is just another example of the small guy getting stepped on! It's blatantly discriminatory. Why hasn't Neptune been declassified under these new rules? After all if Plutos orbit overlaps Neptune then, if we get technical, surely both have failed to clear their orbits of other objects?
Damn elitist scientists!
Tuesday, August 22, 2006
Calling all Search engine experts
I am trying to improve 's Google presences which is pretty poor. If anyone could give me a few tips on how to advance it I would be appreciative cheers. You can email me on thedossingtimes at Thanks again
The EU's plan for Iran and Iraq, Conquer it
Just saw this on a pro EU Constitution website. Thought it was funny.
Gay Mitchell still not in the papers.
Still no word in the newspapers about Gay Mitchell's speech. Seriously what is the deal with this. Tom Parlon rigs a local radio txt poll and it gets covered on Budget day. Gay Mitchell talks about bring back the Monacary during the silly season. And what do we get. Nothing. Is it just me or is that strange. I know I sound like a broken record. But still strange.
Monday, August 21, 2006
Dave Chapelle
If you don't know the guy you should. Here is 3 choice sketchs from him. Samual Jackson beer.
And Just watch this one. Trust me
And to finish off. Black Bush
Fine Gael the Monarchist party in the papers or not.
Is it just me are it strange that Irish Examineer Irish Times and Irish Independent do not seem to cover Gay Mitchell's speech. Does it not seem like a story to everyone else.?
Sunday, August 20, 2006
Fine Gael the Monarchist party?
From Irish Election
I am a republican in much the same ways as I think the vast vast majority of people in Ireland are. i.e. Everyone is created equal and no body by virtue of their birth should be considered better then someone else. You would think most Irish people would agree with that however it seems that Fine Gael are hinting that they would like to see Queen Elizabeth retake the Irish throne.
Saturday, August 19, 2006
Shocking Movie Moments
The critics of the Sunday Times have compiled a list of the 50 most shocking moments in film, categorised under sex, drugs, violence and religion.
The list seems comprehensive but I would unapolegetically add the admittedly predictable 'Psycho'. During some formative years I was lucky enough to watch Psycho without knowing the identity of the killer beforehand and the reveal unnerves me to this day.
The infamy of the shower scene distracts far too much from a resoundingly tense, suspenseful film. Earlier scenes where Marion Crane is stopped at traffic lights and her sisters walking up to the Bate's house and her shocking discovery towards the end literally began my interest in film.
Do you feel there are any omissions to the list?
Friday, August 18, 2006
Science In Ireland
Posted on Irish
As with every year the discussion of the Leaving Cert results centres around 2 things. 1 the gender gap which I will leave to someone else and 2 the decline in the sciences. So what can we do to increase the amount of people taking sciences in the secondary school and at third level. One of the big factors I think in the level of interest in sciences in Ireland is Arts Students.
Continue reading ‘Science In Ireland’
When you know Reuters didn't ask an expert
This is relation to the free energy thing. I have to say I bust my self laughing when I read this. From Reuters
The concept of "free energy" -- which contradicts the first law of thermodynamics that in layman's terms states you cannot get more energy out than you put in -- has divided the scientific community for centuries.
If anything in the world could be said to bring the Scientific community together it is the first law of thermodynamics. Divided the scientific community my arse.
Final Fantasy Physics
From Irish Times
A small Dublin-based company is seeking 12 scientists to verify its claims that it has developed technology that produces "free energy".
To quote Homor Simpson. We obey the laws of thermodynamics in this house. It is utter fantasy. The companies website is here. I am waiting for people to go on about big oil. Why create a panal of 12 scientist? Why have they not submitted for peer review in a scientific journal where all scientists can see their work? Just all smells of Shenanigans to me.
Update: Just thought of something. They published in the Economist instead of lets say Nature or even NewScientist. What is the average Economist reader, someone in Business what is the average reader not, a physicst. So if you were looking for scientist where would you advertise. Nature or the Economist. If you were looking for venture capital where would you advertise. Let me think. This stinks of Shenanigans
Wednesday, August 16, 2006
New research shows 12 planets in Solar System
The world's astronomers and the International Astronomical Union have concluded two years of research that shows there may be 12 planets in our solar system. The work defined the difference between planets and the smaller solar system bodies, such as comets and astroids. If the definition is approved by the astronomers who gather at the IAU General Assembly in Prague this month, the solar system will include 12 planets, with more to come, according to the IAU.
The 12 recognised planets are: Mercury, Venus, Earth, Mars, Ceres, Jupiter, Saturn, Uranus, Neptune, Pluto, Charon and 2003 UB313 (provisional name). There are eight "classical planets": Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. Ceres is a planet, but because it is smaller than Mercury, one may describe it as a "dwarf planet". A new category of planet is now defined: "plutons". Pluto (the prototype for this classification), Charon, and 2003 UB313 fall into the growing category of planets called "plutons".
The West must rebuild Lebanon .
The West must rebuild Lebanon because not only will the middle east lose possibly the only likely democracy outside of Israel but also we must stop Hezbollah, Iran and Syria gaining the power there. They are already ready to spend. From NY times
We must rebuild them stronger so to make sure the extremists fail to take over. The West needs a Marshall Plan for the Middle East.
Check out the new Irish Election Podcast. The sound is not great but we are new at this lark it will get better. But check it out.
Tuesday, August 15, 2006
Music and the national mood
The enduring memory of the 80s for much of the world is mullets, pop socks live aid and fame But in Ireland the memory is queues. Queues outside the American embassy for visas, queues for the bus to London, queues at the airport for the flights to Boston. Ireland in 1980s was dying. Money was tight and people struggled for every penny. Young people had no option but to leave, RTE had shows like Giz a job to teach the 20% of the population without a job, usually how to find a job. Ireland was nearly bankrupt. We were the disappearing Irish, many predicted that Ireland would be no more in a decade or 2 as everyone would have left to look for work. A joke at the time was the last person out would have to remember to turn off the lights. It was a miserable time. .
Our music always reflected this like “Missing you” by Christy Moore In Christy Moors song the chorus had the line “I give all for the price of a flight” the desire to return was all ways a strong message in the music. Shane McGowens tearful poetry about the Irish in Foreign shores, Fairytale of New York, Rainy day in Soho, Misty morning Albert Bridge. Songs about alcohol, depression and America. That music was of its time.
But now our country is booming. Instead of emigration we have immigration. As we are in dire need of the labour. Irish is no longer the second language Polish is. We are no longer an economic case study about how not to run a country we are now a case study about how to run one.
The music now is also of its time. We have the Celtic tiger cubs singing about why a girl does not like them and how they can spend their SSIA’s. We have come a nation that seems to have rebelled utterly from the sheer horror that was the 80s. Spending our money like there is no tomorrow thinking that if we stop the dream will end and we will no longer have Damian Rice whining on about how he can’t take his eyes off someone our music will be like the Pogues and Christy Moore of it’s time and truly tragic.
Whatever Richard thinks of the melodic properties of traditional music compared to Classic. There is two things that classical cannot do as well. Poetry
(you can see a poor quality version of Rainy night in Soho here)and capturing the time.
Monday, August 14, 2006
New blogger Iranian President Mahmoud Ahmadinejad
Sunday, August 13, 2006
Kofi Annan to Hizbollah's Rescue?
The link below is to an article in the Jerusalem Post from last week, delivering a scathing attack on the UN and its resolution just passed. It seems Israel wants the protection of the UN and the enforcement of resolutions against other nations but this of course does not apply to Israel who actively violate resolutions allowing the right of self-determination and right of return to 2 million people, engage in collective punishment and continue to build a 'security wall', each of which the UN have explicitly condemned and forbidden through resolutions. Article Here
Posts and stuff
and don't forget you can't get Humas withouot mashing some chick pea's
Saturday, August 12, 2006
Public transport is shite
Took this picture in Limerick the other day. Ever feel that buses never come on time and when they do they come in pairs. Well they do. Now both those buses should be taking different routes into the city centre (304 and 304A) yet only one bus stops. Reliable eh. The only thing I can conclude is they all take their tea break at the same time. I have written before on Public transport. and Privatising the Bus Eireann Click to enlarge picture.
Friday, August 11, 2006
Irish Mime dancing
Just Wierd. Mimes fighting to the Dubliners And Pirates of the Caribbean cereal
Thursday, August 10, 2006
Terror effects.
It was one of Irelands greatest inventions, it offer the fodder for Christy Moore songs and it was the subject of a long battle with the EU. In 1947 Shannon airport opened the worlds first Duty Free shop. A real stroke of genius that is now the norm across the world. Everyone coming back from abroad has to come back with plentiful supplies of fags and booze for the people at home. Even though the EU banned duty free shops between member states the Travel Value shops still were busy. But in one day today all this is gone.
Due to the terror alerts people are forbidden from bringing any materials onto the planes everything has be checked in. Now as duty free is passed the check in area this means that unless people can have a separate Duty Free check in. They will be unable to buy anything in Duty free, other then food that they consume prior to getting on the plane. Sales of Whiskey, Cigarettes, electronics will all be gone.
How many jobs will lost due to this is still unclear but could be many. Will the airports install checks prior to duty free? This attack may have more effect on aviation then 9/11.
Also I heard a guy talking on Morning Ireland talking about his start-up business creating systems to allow people to make mobile calls on aircraft. I guess this business is dead now. The terrorist have certainly won something here, as they have made us change.
David Cameron Saves the day
Got this email today ( I get a lot of weird emails) in relation to the recent terror alert in London.
The latest terror alert comes at a time of massive momentum for the 9/11 truth movement worlwide... Last week C-Span US coast to coast TV network showed the panel discussion from the American Scholars Symposium in L.A. not once, but four times in four days. 20 million people in the US saw each screening. You can download it here:;amp;vid=0&epi=0&typ=0 This follows from the new Scripts Howard Ohio University poll which last week found that 36 percent of US respondents overall said it is "very likely" or "somewhat likely" that federal officials either participated in the attacks on the World Trade Center and the Pentagon or took no action to stop them "because they wanted the United States to go to war in the Middle East." Today's terror plot must be viewed as an attempt to divert attention from Tony Blair's troubles at home where 150 Labour MPs are lining up in protest against his support for Israel and to act to reinforce the Al Queda myth among those who still believe or are sitting on the fence. We live in interesting times. ----- Terror Plot: How Long Before It Turns Into BS Like Every Other Example? Prison Planet | August 10 2006 Government enforcers and frightened slaves are all hot and bothered about the latest supposed terror plot targeting UK flights inbound to the US. How long before the whole saga turns out to be hoaxed BS like EVERY SINGLE OTHER major terror alert there has been? Picture (Metafile) Ridiculous restrictions have been slapped on travelers, with mother's having to taste baby milk before they board planes and all hand luggage, including liquid drinks, being banned.
The new alert arrives with the 9/11 truth movement on the cusp of a wave of media exposure.
. Evidence of government sponsored terror and how they use the fear of terror to control society is bursting out at the seams as editorials nationwide in the US are uniform in attempting to debunk research that questions the official version of 9/11.
Statistical analysis has proven that every time Blair and Bush sag in approval ratings, a fresh terror alert gives them a bounce back up the charts.
Every single major terror alert issued by either the US, Canadian or UK governments has proven to be either a manufactured facade, an entrapment sting or an outright hoax.
Recently, a supposed plan to hijack planes and fly them into London landmarks was exposed as a concoction of UK government lobbyists and news chiefs.
The July 2005 London bombings were a British intelligence operation. The alleged ringleader, Mohammed Siddique Khan, was working for MI5.
Here is a compendium list of other reports where the role of governments and security agencies in manufacturing artificial terror plots is exposed - within these stories are links to even more. ALSO SEE: FAKE TERROR ALERTS ARCHIVE
So basically this plot was an attempt to shore up Tony Blair’s support and the guys blew themselves up in the under ground in London. Did so not to fight the great Satan or anything else. No they willingly sacrificed themselves for Tony’s poll ratings. What dedication to the cause. Bertie must be looking at disgust at his party wondering why is their no one willing to blow up the Laus for his poll ratings.
But if this “Statistical Analysis” is right. Does that mean that the plot was stopped by a crack squad of Tory MP’s to prevent Labour by-passing them in the polls. Makes sense doesn't it. David Cameron must be a real life Bond. I wonder does his bike have missiles in the lamps.
On a serious note. Where would these aircraft have blown up. Think about the flight paths most of them to America fly over Dublin and Limerick.
Terrorist plot foiled
America has also raised its threat level to the highest point
Wednesday, August 09, 2006
Ireland vs New Zealand 2005
Great the stuff you find on google video.
Funnily Enough
Got this email today no idea why. Also considering I can't vote in the states it is entirely useless. Still thought I would share. Also despite what many of you probably would think I wouldn't vote Republican in America. To conservative for me. But if they went back to their small government, personnel freedoms ways and supported gun control and getting rid of the death penalty I would probably vote them in.
Dear Supporter,
I need your help to send Congressman Charlie Rangel packing back to New York!
Last week, Representative Charlie Rangel from the 15th district of New York said that he will quit Congress if Democrats do not win the majority and take advantage of the current political climate. Congressman Rangel implied that because the nation is leaning away from the Republican party, Democrats must take advantage of this opportunity and win the majority in the House.
I need your help right now to prove to Congressman Rangel that Americans are not deserting the Republican party and that we are continuing to fight for control of the House. Your support at this link will send Charles Rangel packing back to New York and stop the Democrats from taking over the House!
If Charlie Rangel does not want to be a Member of Congress with a Republican majority, then he shouldn't be in Congress at all. I personally do not want to serve in a Democrat-led Congress, but I will not abandon my duties to western New Yorkers if Republicans lose the majority. As elected officials, Congressmen and women must represent our constituents, not leave them when things get ugly in Washington.
I continue to be deeply concerned about this fall's elections and losing control of the House. The Democrats have never before been able to raise as much money and organize the grassroots support they have this year. It is unclear whether they will be able to truly mobilize voters, but we cannot wait and hope they do not get organized; there is little time left until Election Day.
Howard Dean, Nancy Pelosi, DCCC Chairman Rahm Emanuel and national Democrats are counting on Americans to continue to leave the Republican party and vote Democrat on November 7th. We have to prove them wrong.
Let's send the House Democrats packing!
Thank you,
Tom Reynolds
Tom Reynolds, M.C. Chairman
P.S. Forward this message to your friends and ask them to join you in sending Charlie Rangel and House Democrats out of Congress and out of Washington, DC!
Recent post on Irish Election. Time to curtail the TV Tax
Time to curtail the TV Tax
Tuesday, August 08, 2006
Winter Movie Preview
I plan to revisit the movies I advocated you go see this summer in a later post but in anticipation of that I first offer you a selection of movies that look promising for the Autumn and Winter to come. I know now from compiling my list of summer movies that release dates vary too much to put any order to the list, chronological or otherwise but I will go as far as offering that they will be released in what remains of 2006. I also include some movies to avoid.
1. 'Talladega Nights: The Ballad of Ricky Bobby'
In a plot line remniscient of 'Cars', Will Ferrell and 'Anchorman' director Adam McKay reunite for a story of Ferrell playing a stuck-up racing star who learns the value of humility after hitting the skids in a big way. It's good to have a break from the usual brat pack suspects filling out the supporting characters, instead we get the likes of Gary Cole, Sacha Baron Cohen, Michael Clarke Duncan, Leslie Bibb, and John C. Reilly. The buzz is good and at the time of writing 'Talladega Nights' has just toppled 'Miami Vice' from the top spot in the States.
2. 'Snakes on a Plane'
This movie will never meet the internet fueled anticipation surrounding it but for the title alone it deserves a place in any discussion of movies of 2006. The premise is simple: Bad guys have unleashed dozens of deadly snakes aboard on airborne plane, and only one man can save the day: Sam Jackson. In the last few months there has been reshoots, re-edits and more CGI to add more gore and violence. For film students I think this will be the classic case study of a movie so bad its good!
3. 'Black Dalhia'
Although Brian De Palma has been behind such drivel as 'Mission to Mars' and 'Snake Eyes', we can only hope his high calibre source material, a 40s based hard-boiled detective novel by James Ellroy, will see him return to his glory days of 'Carrie' and 'Carlitos Way'. Josh Hartnett and Aaron Eckhart play a pair of detectives assigned to the Black Dahlia murder case, a real-life unsolved mystery involving the vicious murder of an aspiring actress in 1947. Scarlett Johansson and Hilary Swank are on hand, as a loyal wife and a high society femme fatale, respectively.
Avoid: 'JackAss No.2'
4. 'The Departed'
I could understand how a movie goer might be sceptical of a remake of a Japanese movie, disregarding it as yet another one of a string of movies (ala 'The Grudge' or 'The Ring') and stories Hollywood has leeched off. Let me tell you though, whatever the end result, on paper this is the zenith of filmmaking: the story of an undercover cop who invades the mob and a mafia-employed mole who infiltrates the police department, sees Martin Scorcese direct Leonardo DiCaprio, Jack Nicholson, Matt Damon, Martin Sheen, Alec Baldwin, Mark Wahlberg, Ray Winstone, and Vera Farmiga. Jack Nicholson said following 9/11 he would not do anymore serious roles, lets hope that what he saw in this script that changed his mind will translate onto screen.
5. 'Hollywoodland'
Yes I am advocating you check the cinema schedules for a Ben affleck movie. Originally titled 'Truth, Justice and the American Way' until copyright intervened, Affleck stars in a biopic of George Reeves, the actor who portrayed Superman in a TV serial in the 1950s and whose death, an alleged suicide, is still considered a mystery. Adrien Brody also stars as a Pivate-eye investigating the death on the wishes of Reeves' mother and Diane Lane plays a movie executives wife with whom Reeves had an affair.
6. 'KillShot'
Here, Director John Madden ("Shakespeare in Love") adapts an Elmore Leonard ("Out of Sight," "Get Shorty") about a husband (Thomas Jane) and wife (Diane Lane) who get mixed up with a con man (Joseph Gordon-Levitt of 'Brick') and an assassin (Mickey Rourke). Rosario Dawson and Johnny Knoxville fill out the cast. Certainly one to watch out for.
7. 'Marie Antoinette'
Sofia Coppola's follow-up to "Lost in Translation" will surely be discussed as a contender for next years Oscars and of course was the talk of the town at Cannes on how it portrayed the infamous French monarch. If however you want a less pretentious reason to go see this movie, Steve Coogan is in it with Kirsten Dunst playing the titular role.
Avoid: 'Saw III'
8. 'Flushed Away'
The people behind "Chicken Run" and "Wallace and Gromit in The Curse of the Were-Rabbit," work their claymation magic on the story of an upper-crust rat who's flushed down the loo, to undergo the inevitable journey of learning and growing as a rat, finding his way home. The Brit-heavy cast features Hugh Jackman, Kate Winslet, Andy Serkis, Jean Reno, Bill Nighy, and Ian McKellen.
9. 'Tenacious D in the Pick of Destiny'
Jack Black continues his warped journey to world domination with a story that sees Black and partner Kyle Gass setting off on a 300-mile journey to steal a legendary guitar pick from a museum. I feel there's a chance this movie will feature some rockin' and hopefully some laughin'. The cast list includes Ben Stiller, Will Ferrell and Tim Robbins.
Avoid: (It hurts me to type this) Hilary and Haylie Duff star in 'Material Girls', a movie about wealthy sisters who loose their fortune.
10. 'Casino Royale'
More than a year after Pierce Brosnan left the series, with a film title, script and director attached (Casino Royale will be an adaptation of Ian Flemings first James Bond novel of the same name) no Bond was yet cast. With Daniel Craig being confirmed there was uproar and his suitability for the role will no doubt be discussed to no end until November. One of the official blurbs is that producers are taking Bond back to his roots ala Batman Begins and giving him a grittier tone in light of the success of the Bourne series and also see an end to the over reliance on gadgets and effects after the disastrous use of CGI in Die Another Day. Casino Royale abandons continuity so that Judi Dench returns as 'M' despite the fact the film is set when he first earns his '00' status (in Goldeneye Dench had just succeeded to the role). Not returning to the series are John Cleese ('Q') and Samantha Bond (Miss Moneypenney), neither character will in fact appear, though there is an op-tec character in the novel so it remains to be seen whether he will be introduced. The movie will be directed by Martin Campbell, who also directed Goldeneye as well as both Zorro movies and the script was brushed up by Million Dollar Baby and Crash writer, Paul Haggis. Craig impressed in Munich and Layer Cake but he has a heavy burden on his shoulders to carry off an event movie as Bond movies are. The trailer is very promising, the only thing that irks me is how the marketers movies missed out on the opportunity of releasing the new 007 in 2007.
11. 'The Fountain'
Darren Aronofsky's long-awaited follow-up to "Requiem for a Dream" comes in the form of a 1000 year odyssey, science fiction story of star -crossed romance. Hugh Jackman and Rachel Weisz star as three pairs of lovers in three different time zones. And by time zones, I mean the years 1500, 2000, and 2500. Brad Pitt abandoned this project just months before shooting was to commence to go an make 'Troy' in 2001 bringing production to a halt. It remains to be seen whether he will regret his decision?
All quiet on the left front.
Hizbollah fire wounds 3 UN peacekeepers in Lebanon. Can't remember reading anything about this in the Irish Newspapers. Can you?
I have to wonder does the fact that Hezbollah rockets are not as good as Israeli missles make them better in the eyes of some people. The differance between the two attacks on UN posts is only that the Israelis had better fire power.
Monday, August 07, 2006
Following my post about physics I thought some of you might like this program about relativity, the big bang and problems with it.
They Don't make videos like this anymore
Joni Mitchell.
Sunday, August 06, 2006
Guess whos back guess whos back
Remember the Direct Action Against Drugs guys who popped up during one of the IRA ceasefires and then disappeared again. Well they are back.
Castro is a strange character that causes people to come out in hot flushes. People on the left hate Bush and McDowell because they deem that they are trying to get rid of peoples human rights yet Castro someone who has actually succeeded in bringing down peoples rights they love.
Now there is no doubt that of all the totalitarian dictators in the world Castro is probably the nicest of them all. Compare the Health service of Cuba with that of North Korea and there is certainly a difference between the two regimes. Also the education system is also not to shabby and the pharmaceutical industry is great. But despite what the people on the left think about all the great parts achievements it is still a regime and that cannot be ignored.
Decent is still not tolerated. You think that Gitmo is bad go to Havana and talk about the merits of a free press and you will get to have a close look at the bottom of soliders boot. Anyone who gives out about Bushes wiretapping should really remember that compared to Castro the Americans are certifiable saints. The Cuban economy is falling apart and is being propped up by Castro’s heir in lefties affections Chavez. Now many people will say that this is due American sanctions and they do have had an effect. But they are not the entire source of the problem. Indeed most of Europe and Canada does trade with Cuba. The reason for the fall in the economy is simply due to the reason that many Lefties will not admit that communism does not work. Not saying that socialism does not work I mean Sweden is not doing to bad for it’s self.
As for Democracy which people of the left say McDowell and Bush dont like I don’t even have to point out the counter argument to that.
What effect Castro has had on the island is very interesting compared to where they were under the previous dictatorship. Now while many people think that what happened then will happen now if the country goes back to capitalism I think the advances made during Castro’s regime have certainly placed it in a position to use its education to become a successful nation. However a continuation of the regime will damage the country. While Castro’s regime is not sunshine and lollipops and I am not legitimising it here, it is better then the previous one.
So what is it about Cuba that people love so much. I think it is a mixture of the fighting the good fight against the big bad Americans and the image of Castro. The romantic image of the Castro smoking a cigar is certainly powerful and who doesn’t like a romantic hero no matter how flawed. People have put their idea for an ideal soceity on Cuba.
Now I could have written more about this but I am a wee bit blogged out at the moment.
The Stewart Effect
Jon Stewart causes young people to develop very cynical views about politicians and politics. Amm this probably has more to do with politicians and the likes of Fox news then anything
I mean just look at this clip. And they wonder why people are cynical. Could have used that one around the time of Richard Bruton and McDowell's little spat.
Friday, August 04, 2006
For Flip sake
In fairness if you are going to not invest in Freeport because of environmental concerns how about stop whaling first.
Thursday, August 03, 2006
Zidane the game
Loads of fun.
One of the things I really like about physics is the artistic beauty of the way the math works. Now I am no theoretical physicist I have always been more towards the experimental side but sometimes when the 2 collide it really makes me see probably what people who love great art see. I can’t really explain the feeling that I can get from it like someone who love art probably can’t either (difference is they try with as many words as possible while physics try using as few symbols as possible although we prove we can use so few symbols with hundreds of symbols J ). But it is I guess the one thing I can relate to when people go on about how great modern art is.
Take for example a femtosecond lasers. Most laser people see are monochromatic i.e. they emitted light of a single wavelength (for the purposes of this post anyway). However when you pulse a laser on and off at high speeds you see something amazing they are white (talking laser in the visible spectrum here there are ultraviolet laser with the same broadband spectrum but the eye can’t see them. But the same principle applies) . Now you can only do this with certain types of lasers and you can’t do it with a laser pointer no matter how hard you try. To pulse at this speed you use various techniques that I wouldn’t get into now but they are pretty cool. By the way an femtosecond is 1*10^-15 or 0.000000000000001 seconds. I.E very short.
So why would it be white I hear you ask well. That comes down to theoretical physics quantum to be precise and Heisenberg’s Uncertainty Principle.
Werner Heisenberg was a German physicist who headed up the Nazis nuclear program. There is much debate over whether he a.)Mis calculated the critical mass needed for a bomb. b.) deliberately slowed down the program to stop the Nazis getting the bomb or c.) Didn’t really care for the bomb and directed the research towards Nuclear power. What ever the reason thankfully Hitler didn’t have a bomb. Heisenbergs as it is known by some. Can be explained by this thought experiment.
If you want to see an electron a photon has to hit off it and then enter your eye. However when it hits it changes the electron. Like a cue ball hitting a snooker ball changes a ball. Thus you cannot be certain of where the ball is. As you are seeing it at the point of impact not where it has been moved to. This can be represented as so.
Where delta (the triangle) is the difference in x (position) and delta p (momentum) is the difference in p. And this is equal to planks constant/2. As this is equal to a constant if one value goes down to keep the equation balanced the other has to go up. For example x+y=10. Lets say x is equal to 4 and y=6. If we make x =3 then for everything to balance y has to go up to 7. So basically the more you know x (position) the less you know about p (momentum).
As the joke goes Heisenburg gets pulled over driving by a cop. The cop says do you know how fast you are going. He replies no but I know where I am.
Anyway this leads on to relativity and this equation derived from the Time Dependent Schrödinger equation. (Schrödinger of the cat fame)
Basically in this one. E is energy and t is time. Thus the more you know about the t the less you know about the E. Now when you are pulsing a laser at attoseconds you know very precisely how long the pulse is so that means you don’t know the value of the energy very well.
E=hv is an equation from Max Plank where h is planks constant and u is the frequency. Frequency and wavelength are related by c=fv. Ok so if you are uncertain about the energy you are uncertain about the frequency and thus uncertain about the wavelength. As you are uncertain of the wavelength. Then it could be any number of wavelengths you simply cannot say as you cannot see a single wavelength you see many.
You see white light. Beautiful
Irish Film Institute now a political organisation?
Due to the current Israeli activities in the Lebanon, the Irish Film Institute has decided to cancel the sponsorship provided by the Embassy of Israel in Ireland for 'Walk on Water', one of the feature films being screened this Friday in the Gay and Lesbian Film Festival.
From here. Are they not sponsored by the Arts Council and Lotto funding which I am guessing is supposed to be non-political funding. Thus making them a non-political organization. So where does this leave us now. Is the film Institute now a political organisation. Can we now vote for or against them?. Is their job no longer to promote film but to promote a political message as well? Or am I being to unreasonable? Hat tip Manhattan Notes.
Why is the Middle East anisotropic
I asked this question before but I think this is a much nicer way to phrase it.
Why does the middle east make the west like this \frac{\omega^4}{c^4} + \frac{\omega^2}{c^2}\left(\frac{k_x^2+k_y^2}{n_z^2}+\frac{k_x^2+k_z^2}{n_y^2}+\frac{k_y^2+k_z^2}{n_x^2}\right) + \left(\frac{k_x^2}{n_y^2n_z^2}+\frac{k_y^2}{n_x^2n_z^2}+\frac{k_z^2}{n_x^2n_y^2}\right)(k_x^2+k_y^2+k_z^2)=0\,.
Rather then like this.
i.e. Polarising
the leb again.
Wednesday, August 02, 2006
One in 4.2 million
Ahern confident of winning next year’s election At least someone is keeping the faith.
An Inconvenient Truth
I'm unsure as to whether 'An Inconvenient Truth', a documentary of the speech on global warming given by Al Gore will be released in Ireland or to what extent but having seen it I would highly recommend it. Gore strives to raise awareness and disipate misunderstandings, continuing a campaign he has been a supporter of from his earliest days in office. The speech is littered with insights into his life, his motivations and of course reflections on his run for the White House. Be sure to keep an eye out for it, particularly on the film festival circuit or in art house cinemas. As a starting point, the website Gore aks attendees and viewers alike to visit is
Good Question
Pakistani women protesters take part in a rally, July 26 in Lahore, Pakistan to condemn the ongoing Israeli strikes against Lebanon and Palestinian territories. (AP Photo/K.M.Chaudary) Hat tip calcon
Tuesday, August 01, 2006
Israel the leb II
I haven't written an opinion here on the Middle east for a while and I have little time to write a big long piece now. But looking at the recent incidents in Lebanon it is clear that Israel is acting too harshly and strongly. As MacDara says "Women and children dead = Women and children dead". Now I know the same thing is happening on the Israeli side with the casualties being minimized not due to Hezbollah decency but a mixture of ineptitude and Israel evacuation and preparation. But these airstrikes are achieving nothing.
Now I have already said . There seems always to have been a belief in air power's and precision weapons ability to win a war. Trying this theory out resulted in Zeppelins, Dresdan and much of the deaths in nam. They won nothing. The same thing is happening with Israel it is achieving nothing but death. If you really want to go after the terrorist you really have to just go in with troops. You can eye ball the missiles to call in the air force and you know the terrorists as they are the ones shooting at you. So this phase of Israelis campaign is probably the most strategically wise.
What is it about the middle East that's gets everyone's goat
I know that the Palestinians and the Lebanese suffer greatly at the hands of the Israelis. I also know that the Israelis are surrounded by people who want to start where Hitler left off. But why is the middle east such a big issue. It gets more coverage, more time dedicated to it, and takes less lives then famines in Africa and dictatorships in North Korea.
Is it a race thing with the left believing that white people can do no right and the right thinking that white people can do no wrong. Is is religion? Is it the big guy vs the small guy. Or is it just instead of the famine in Ethiopia in the 80s the cool thing to be concerned about.
Can anyone explain it to me. |
c0a002a366275fda | Download Q and P college-physics-with-concept-coach-3.3
yes no Was this document useful for you?
Thank you for your participation!
Document related concepts
Theoretical and experimental justification for the Schrödinger equation wikipedia, lookup
Conservation of energy wikipedia, lookup
Woodward effect wikipedia, lookup
Lorentz force wikipedia, lookup
Speed of gravity wikipedia, lookup
Electrical resistance and conductance wikipedia, lookup
Gravity wikipedia, lookup
Weightlessness wikipedia, lookup
Free fall wikipedia, lookup
Electromagnetism wikipedia, lookup
Work (physics) wikipedia, lookup
Anti-gravity wikipedia, lookup
Faster-than-light wikipedia, lookup
Time in physics wikipedia, lookup
Classical central-force problem wikipedia, lookup
Newton's laws of motion wikipedia, lookup
Mass versus weight wikipedia, lookup
Electromagnetic mass wikipedia, lookup
History of thermodynamics wikipedia, lookup
Nuclear physics wikipedia, lookup
Negative mass wikipedia, lookup
Chapter 25 | Geometric Optics
Chapter 25 Homework
Conceptual Questions
25.2 The Law of Reflection
1. Using the law of reflection, explain how powder takes the shine off of a personâs nose. What is the name of the optical effect?
25.3 The Law of Refraction
2. Diffusion by reflection from a rough surface is described in this chapter. Light can also be diffused by refraction. Describe how
this occurs in a specific situation, such as light interacting with crushed ice.
3. Why is the index of refraction always greater than or equal to 1?
4. Does the fact that the light flash from lightning reaches you before its sound prove that the speed of light is extremely large or
simply that it is greater than the speed of sound? Discuss how you could use this effect to get an estimate of the speed of light.
5. Will light change direction toward or away from the perpendicular when it goes from air to water? Water to glass? Glass to
6. Explain why an object in water always appears to be at a depth shallower than it actually is? Why do people sometimes
sustain neck and spinal injuries when diving into unfamiliar ponds or waters?
7. Explain why a personâs legs appear very short when wading in a pool. Justify your explanation with a ray diagram showing the
path of rays from the feet to the eye of an observer who is out of the water.
8. Why is the front surface of a thermometer curved as shown?
Figure 25.47 The curved surface of the thermometer serves a purpose.
9. Suppose light were incident from air onto a material that had a negative index of refraction, say â1.3; where does the refracted
light ray go?
25.4 Total Internal Reflection
10. A ring with a colorless gemstone is dropped into water. The gemstone becomes invisible when submerged. Can it be a
diamond? Explain.
perhaps referring to Figure 25.48. Some of us have seen the formation of a double rainbow. Is it physically possible to observe a
triple rainbow?
Figure 25.48 Double rainbows are not a very common observance. (credit: InvictusOU812, Flickr) |
20403e5432446f49 | Details, Explanation and Meaning About Physics
Physics Guide, Meaning , Facts, Information and Description
Physics (from the Greek, φυσικός (physikos), "natural", and φύσις (physis), "Nature") is the science of Nature in the broadest sense. Physicists study the behavior and properties of matter in a wide variety of contexts, ranging from the sub-microscopic particles from which all ordinary matter is made (particle physics) to the behavior of the material Universe as a whole (cosmology).
Some of the properties studied in physics are common to all material systems, such as the conservation of energy. Such properties are often referred to as laws of physics. Physics is sometimes said to be the "fundamental science", because each of the other natural sciences (biology, chemistry, geology, etc.) deals with particular types of material system that obey the laws of physics. For example, chemistry is the science of molecules and the chemicalss that they form in the bulk. The properties of a chemical are determined by the properties of the underlying molecules, which are accurately described by areas of physics such as quantum mechanics, thermodynamics, and electromagnetism.
Physics is also closely related to mathematics. Physical theories are almost invariably expressed using mathematical relations, and the mathematics involved is generally more complicated than in the other sciences. The difference between physics and mathematics is that physics is ultimately concerned with descriptions of the material world, whereas mathematics is concerned with abstract patterns that need not have any bearing on it. However, the distinction is not always clear-cut. There is a large area of research intermediate between physics and mathematics, known as mathematical physics, devoted to developing the mathematical structure of physical theories.
Table of contents
1 Overview of physics research
2 History
3 Links and references
Overview of physics research
Theoretical and experimental physics
The culture of physics research differs from the other sciences in the separation of theory and experiment. Since the 20th century, most individual physicists have specialized in either theoretical physics or experimental physics, and very few physicists have been successful in both forms of research. In contrast, almost all the successful theorists in biology and chemistry have also been experimentalists.
Roughly speaking, theorists seek to develop theories that can explain existing experimental results and successfully predict future results, while experimentalists devise and perform experiments to test theoretical predictions. Although theory and experiment are developed separately, they are strongly dependent on each other. Progress in physics frequently comes about when experimentalists make a discovery that existing theories cannot account for, necessitating the fomulation of new theories. In the absence of experiment, theoretical research frequently goes in the wrong direction; this is one of the criticisms that have been levelled against M-theory, a popular theory in high-energy physics for which no practical experimental test has ever been devised.
Central theories of physics
While physics deals with an extremely wide variety of systems, there are certain theories that are used by physics as a whole, and not by any single field. Each of these theories is believed to be basically correct, within a certain domain of validity. For instance, the theory of classical mechanics accurately describes the motion of objects, provided they are much larger than atoms and moving at much less than the speed of light. These theories continue to be areas of active research; for instance, a remarkable aspect of classical mechanics known as chaos was discovered in the 20th century, three centuries after its formulation by Isaac Newton. However, few physicists expect any of them to prove fundamentally misguided. Therefore, they are used as the basis for research into more specialized topics, and any contemporary student of physics, regardless of his or her specialization, is generally expected to be well-versed in all of them.
Theory Major subtopics Concepts
Classical mechanics Newton's laws of motion Lagrangian mechanics Hamiltonian mechanics Chaos theory Fluid dynamics Continuum mechanics Dimension Space Time Motion Length Velocity Mass Momentum Force Energy Angular momentum Torque Conservation law Harmonic oscillator Wave Work Power
Electromagnetism Electrostatics Electricity Magnetism Maxwell's equations Electric charge Current Electric field Magnetic field Electromagnetic field Electromagnetic radiation Magnetic monopole
Thermodynamics and Statistical mechanics Heat engine Kinetic theory Boltzmann's constant Entropy Free energy Heat Partition function Temperature
Quantum mechanics Path integral formulation Schrödinger equation Quantum field theory Hamiltonian Identical particles Planck's constant Quantum entanglement Quantum harmonic oscillator Wavefunction Zero-point energy
Theory of relativity Special relativity General relativity Equivalence principle Four-momentum Reference frame Spacetime Speed of light
Major fields of physics
Contemporary research in physics is divided into several distinct fields that study different aspects of the material world. Condensed matter physics, by most estimates the largest single field of physics, is concerned with how the properties of bulk matter, such as the ordinary solids and liquids we encounter in everyday life, arise from the properties and mutual interactions of the constituent atoms. The field of atomic, molecular, and optical physics deals with the behavior of individual atoms and molecules, and in particular the ways in which they absorb and emit light. The field of particle physics, also known as "high-energy physics", is concerned with the properties of submicroscopic particles much smaller than atoms, including the elementary particles from which all other units of matter are constructed. Finally, the field of astrophysics applies the laws of physics to explain astronomical phenomena, ranging from the Sun and the other objects in the solar system to the universe as a whole.
Field Subfields Major theories Concepts
Astrophysics Cosmology, Planetology, Plasma physics Big Bang Cosmic inflation General relativity Law of universal gravitation Black hole Cosmic background radiation Galaxy Gravity Gravitational radiation Planet Solar system Star
Atomic, molecular, and optical physics Atomic physics, Molecular physics, Optics, Photonics Quantum optics Diffraction Electromagnetic radiation Laser Polarization Spectral line
Particle physics Accelerator physics, Nuclear physics Standard Model Grand unification theory Loop quantum gravity M-theory Fundamental force (gravitational, electromagnetic, weak, strong) Elementary particle Antimatter Spin Spontaneous symmetry breaking Theory of everything Vacuum energy
Condensed matter physics Solid state physics, Materials physics, Polymer physics BCS theory Bloch wave Fermi gas Fermi liquid Many-body theory Phases (gas, liquid, solid, Bose-Einstein condensate, superconductor, superfluid) Electrical conduction Magnetism Self-organization Spin Spontaneous symmetry breaking
Related fields
There are many areas of research that mix physics with other disciplines. For example, the wide-ranging field of biophysics is devoted to the role that physical principles play in biological systems, and the field of quantum chemistry studies how the theory of quantum mechanics gives rise to the chemical behavior of atoms and molecules. Some of these are listed below.
Acoustics - Astronomy - Biophysics - Computational physics - Electronics - Engineering - Geophysics - Materials science - Mathematical physics - Medical physics - Physical chemistry - Physics of computation - Vehicle dynamics
Fringe theories
Cold fusion - Dynamic theory of gravity - Luminiferous aether - Orgone energy - Steady state theory
Main article: History of physics. See also Famous physicists and Nobel Prize in Physics.
The United Nations have declared the year 2005 as the World Year of Physics [1].
Future directions
Main article: unsolved problems in physics.
As of 2004, research in physics is progressing on a large number of fronts.
In the rush to solve high-energy, quantum, and astronomical physics, quite a bit of quotidian physics have been left behind. Complex problems that seem like they could be solved by a clever application of dynamics and mechanics, like the formation of sandpiles, nodes in trickling water, the shape of water droplets, mechanisms of surface tension catastrophes, or self-sorting in shaken heterogeneous collections, still remain largely open for characterization.
Links and references
Suggested readings
• Feynman, The Character of Physical Law, Random House (Modern Library), 1994, hardcover, 192 pages, ISBN 0679601279
• Feynman, Leighton, Sands, The Feynman Lectures on Physics, Addison-Wesley 1970, 3 volumes, paperback, ISBN 0201021153. Hardcover commemorative edition, 1989, ISBN 0201500647
• Landau, et. al., Course of Theoretical Physics, Butterworth-Heinemann, 1976, 10 volumes, paperback, ISBN 0750628960
• Walker, The Flying Circus of Physics, Wiley, 1977, paperback, 312 pages, ISBN 047102984X
External links
See also
This is an Article on Physics. Page Contains Information, Facts Details or Explanation Guide About Physics
Search Anything |
57da427ceb6d3d49 |
Read More on This Topic
Babylonian mathematical tablet.
mathematics: Mathematical physics
The scope of physics
The study of gravitation
The study of heat, thermodynamics, and statistical mechanics
First law
Second law
Third law
Statistical mechanics
The study of electricity and magnetism
Although conceived of as distinct phenomena until the 19th century, electricity and magnetism are now known to be components of the unified field of electromagnetism. Particles with electric charge interact by an electric force, while charged particles in motion produce and respond to magnetic forces as well. Many subatomic particles, including the electrically charged electron and proton and the electrically neutral neutron, behave like elementary magnets. On the other hand, in spite of systematic searches undertaken, no magnetic monopoles, which would be the magnetic analogues of electric charges, have ever been found.
The field concept plays a central role in the classical formulation of electromagnetism, as well as in many other areas of classical and contemporary physics. Einstein’s gravitational field, for example, replaces Newton’s concept of gravitational action at a distance. The field describing the electric force between a pair of charged particles works in the following manner: each particle creates an electric field in the space surrounding it, and so also at the position occupied by the other particle; each particle responds to the force exerted upon it by the electric field at its own position.
Classical electromagnetism is summarized by the laws of action of electric and magnetic fields upon electric charges and upon magnets and by four remarkable equations formulated in the latter part of the 19th century by the Scottish physicist James Clerk Maxwell. The latter equations describe the manner in which electric charges and currents produce electric and magnetic fields, as well as the manner in which changing magnetic fields produce electric fields, and vice versa. From these relations Maxwell inferred the existence of electromagnetic waves—associated electric and magnetic fields in space, detached from the charges that created them, traveling at the speed of light, and endowed with such “mechanical” properties as energy, momentum, and angular momentum. The light to which the human eye is sensitive is but one small segment of an electromagnetic spectrum that extends from long-wavelength radio waves to short-wavelength gamma rays and includes X-rays, microwaves, and infrared (or heat) radiation.
Because light consists of electromagnetic waves, the propagation of light can be regarded as merely a branch of electromagnetism. However, it is usually dealt with as a separate subject called optics: the part that deals with the tracing of light rays is known as geometrical optics, while the part that treats the distinctive wave phenomena of light is called physical optics. More recently, there has developed a new and vital branch, quantum optics, which is concerned with the theory and application of the laser, a device that produces an intense coherent beam of unidirectional radiation useful for many applications.
The formation of images by lenses, microscopes, telescopes, and other optical devices is described by ray optics, which assumes that the passage of light can be represented by straight lines, that is, rays. The subtler effects attributable to the wave property of visible light, however, require the explanations of physical optics. One basic wave effect is interference, whereby two waves present in a region of space combine at certain points to yield an enhanced resultant effect (e.g., the crests of the component waves adding together); at the other extreme, the two waves can annul each other, the crests of one wave filling in the troughs of the other. Another wave effect is diffraction, which causes light to spread into regions of the geometric shadow and causes the image produced by any optical device to be fuzzy to a degree dependent on the wavelength of the light. Optical instruments such as the interferometer and the diffraction grating can be used for measuring the wavelength of light precisely (about 500 micrometres) and for measuring distances to a small fraction of that length.
Atomic and chemical physics
One of the great achievements of the 20th century was the establishment of the validity of the atomic hypothesis, first proposed in ancient times, that matter is made up of relatively few kinds of small, identical parts—namely, atoms. However, unlike the indivisible atom of Democritus and other ancients, the atom, as it is conceived today, can be separated into constituent electrons and nucleus. Atoms combine to form molecules, whose structure is studied by chemistry and physical chemistry; they also form other types of compounds, such as crystals, studied in the field of condensed-matter physics. Such disciplines study the most important attributes of matter (not excluding biologic matter) that are encountered in normal experience—namely, those that depend almost entirely on the outer parts of the electronic structure of atoms. Only the mass of the atomic nucleus and its charge, which is equal to the total charge of the electrons in the neutral atom, affect the chemical and physical properties of matter.
Although there are some analogies between the solar system and the atom due to the fact that the strengths of gravitational and electrostatic forces both fall off as the inverse square of the distance, the classical forms of electromagnetism and mechanics fail when applied to tiny, rapidly moving atomic constituents. Atomic structure is comprehensible only on the basis of quantum mechanics, and its finer details require as well the use of quantum electrodynamics (QED).
Atomic properties are inferred mostly by the use of indirect experiments. Of greatest importance has been spectroscopy, which is concerned with the measurement and interpretation of the electromagnetic radiations either emitted or absorbed by materials. These radiations have a distinctive character, which quantum mechanics relates quantitatively to the structures that produce and absorb them. It is truly remarkable that these structures are in principle, and often in practice, amenable to precise calculation in terms of a few basic physical constants: the mass and charge of the electron, the speed of light, and Planck’s constant (approximately 6.62606957 × 10−34 joule∙second), the fundamental constant of the quantum theory named for the German physicist Max Planck.
Condensed-matter physics
This field, which treats the thermal, elastic, electrical, magnetic, and optical properties of solid and liquid substances, grew at an explosive rate in the second half of the 20th century and scored numerous important scientific and technical achievements, including the transistor. Among solid materials, the greatest theoretical advances have been in the study of crystalline materials whose simple repetitive geometric arrays of atoms are multiple-particle systems that allow treatment by quantum mechanics. Because the atoms in a solid are coordinated with each other over large distances, the theory must go beyond that appropriate for atoms and molecules. Thus conductors, such as metals, contain some so-called free electrons, or valence electrons, which are responsible for the electrical and most of the thermal conductivity of the material and which belong collectively to the whole solid rather than to individual atoms. Semiconductors and insulators, either crystalline or amorphous, are other materials studied in this field of physics.
Other aspects of condensed matter involve the properties of the ordinary liquid state, of liquid crystals, and, at temperatures near absolute zero, of the so-called quantum liquids. The latter exhibit a property known as superfluidity (completely frictionless flow), which is an example of macroscopic quantum phenomena. Such phenomena are also exemplified by superconductivity (completely resistance-less flow of electricity), a low-temperature property of certain metallic and ceramic materials. Besides their significance to technology, macroscopic liquid and solid quantum states are important in astrophysical theories of stellar structure in, for example, neutron stars.
Nuclear physics
Like excited atoms, unstable radioactive nuclei (either naturally occurring or artificially produced) can emit electromagnetic radiation. The energetic nuclear photons are called gamma rays. Radioactive nuclei also emit other particles: negative and positive electrons (beta rays), accompanied by neutrinos, and helium nuclei (alpha rays).
A principal research tool of nuclear physics involves the use of beams of particles (e.g., protons or electrons) directed as projectiles against nuclear targets. Recoiling particles and any resultant nuclear fragments are detected, and their directions and energies are analyzed to reveal details of nuclear structure and to learn more about the strong force. A much weaker nuclear force, the so-called weak interaction, is responsible for the emission of beta rays. Nuclear collision experiments use beams of higher-energy particles, including those of unstable particles called mesons produced by primary nuclear collisions in accelerators dubbed meson factories. Exchange of mesons between protons and neutrons is directly responsible for the strong force. (For the mechanism underlying mesons, see below Fundamental forces and fields.)
In radioactivity and in collisions leading to nuclear breakup, the chemical identity of the nuclear target is altered whenever there is a change in the nuclear charge. In fission and fusion nuclear reactions in which unstable nuclei are, respectively, split into smaller nuclei or amalgamated into larger ones, the energy release far exceeds that of any chemical reaction.
Particle physics
One of the most significant branches of contemporary physics is the study of the fundamental subatomic constituents of matter, the elementary particles. This field, also called high-energy physics, emerged in the 1930s out of the developing experimental areas of nuclear and cosmic-ray physics. Initially investigators studied cosmic rays, the very-high-energy extraterrestrial radiations that fall upon the Earth and interact in the atmosphere (see below The methodology of physics). However, after World War II, scientists gradually began using high-energy particle accelerators to provide subatomic particles for study. Quantum field theory, a generalization of QED to other types of force fields, is essential for the analysis of high-energy physics. Subatomic particles cannot be visualized as tiny analogues of ordinary material objects such as billiard balls, for they have properties that appear contradictory from the classical viewpoint. That is to say, while they possess charge, spin, mass, magnetism, and other complex characteristics, they are nonetheless regarded as pointlike.
During the latter half of the 20th century, a coherent picture evolved of the underlying strata of matter involving two types of subatomic particles: fermions (baryons and leptons), which have odd half-integral angular momentum (spin 1/2, 3/2) and make up ordinary matter; and bosons (gluons, mesons, and photons), which have integral spins and mediate the fundamental forces of physics. Leptons (e.g., electrons, muons, taus), gluons, and photons are believed to be truly fundamental particles. Baryons (e.g., neutrons, protons) and mesons (e.g., pions, kaons), collectively known as hadrons, are believed to be formed from indivisible elements known as quarks, which have never been isolated.
Quarks come in six types, or “flavours,” and have matching antiparticles, known as antiquarks. Quarks have charges that are either positive two-thirds or negative one-third of the electron’s charge, while antiquarks have the opposite charges. Like quarks, each lepton has an antiparticle with properties that mirror those of its partner (the antiparticle of the negatively charged electron is the positive electron, or positron; that of the neutrino is the antineutrino). In addition to their electric and magnetic properties, quarks participate in both the strong force (which binds them together) and the weak force (which underlies certain forms of radioactivity), while leptons take part in only the weak force.
Baryons, such as neutrons and protons, are formed by combining three quarks—thus baryons have a charge of −1, 0, or 1. Mesons, which are the particles that mediate the strong force inside the atomic nucleus, are composed of one quark and one antiquark; all known mesons have a charge of −2, −1, 0, 1, or 2. Most of the possible quark combinations, or hadrons, have very short lifetimes, and many of them have never been seen, though additional ones have been observed with each new generation of more powerful particle accelerators.
The quantum fields through which quarks and leptons interact with each other and with themselves consist of particle-like objects called quanta (from which quantum mechanics derives its name). The first known quanta were those of the electromagnetic field; they are also called photons because light consists of them. A modern unified theory of weak and electromagnetic interactions, known as the electroweak theory, proposes that the weak force involves the exchange of particles about 100 times as massive as protons. These massive quanta have been observed—namely, two charged particles, W+ and W, and a neutral one, W0.
In the theory of the strong force known as quantum chromodynamics (QCD), eight quanta, called gluons, bind quarks to form baryons and also bind quarks to antiquarks to form mesons, the force itself being dubbed the “colour force.” (This unusual use of the term colour is a somewhat forced analogue of ordinary colour mixing.) Quarks are said to come in three colours—red, blue, and green. (The opposites of these imaginary colours, minus-red, minus-blue, and minus-green, are ascribed to antiquarks.) Only certain colour combinations, namely colour-neutral, or “white” (i.e., equal mixtures of the above colours cancel out one another, resulting in no net colour), are conjectured to exist in nature in an observable form. The gluons and quarks themselves, being coloured, are permanently confined (deeply bound within the particles of which they are a part), while the colour-neutral composites such as protons can be directly observed. One consequence of colour confinement is that the observable particles are either electrically neutral or have charges that are integral multiples of the charge of the electron. A number of specific predictions of QCD have been experimentally tested and found correct.
Quantum mechanics
Although the various branches of physics differ in their experimental methods and theoretical approaches, certain general principles apply to all of them. The forefront of contemporary advances in physics lies in the submicroscopic regime, whether it be in atomic, nuclear, condensed-matter, plasma, or particle physics, or in quantum optics, or even in the study of stellar structure. All are based upon quantum theory (i.e., quantum mechanics and quantum field theory) and relativity, which together form the theoretical foundations of modern physics. Many physical quantities whose classical counterparts vary continuously over a range of possible values are in quantum theory constrained to have discontinuous, or discrete, values. Furthermore, the intrinsically deterministic character of values in classical physics is replaced in quantum theory by intrinsic uncertainty.
According to quantum theory, electromagnetic radiation does not always consist of continuous waves; instead it must be viewed under some circumstances as a collection of particle-like photons, the energy and momentum of each being directly proportional to its frequency (or inversely proportional to its wavelength, the photons still possessing some wavelike characteristics). Conversely, electrons and other objects that appear as particles in classical physics are endowed by quantum theory with wavelike properties as well, such a particle’s quantum wavelength being inversely proportional to its momentum. In both instances, the proportionality constant is the characteristic quantum of action (action being defined as energy × time)—that is to say, Planck’s constant divided by 2π, or ℏ.
In principle, all of atomic and molecular physics, including the structure of atoms and their dynamics, the periodic table of elements and their chemical behaviour, as well as the spectroscopic, electrical, and other physical properties of atoms, molecules, and condensed matter, can be accounted for by quantum mechanics. Roughly speaking, the electrons in the atom must fit around the nucleus as some sort of standing wave (as given by the Schrödinger equation) analogous to the waves on a plucked violin or guitar string. As the fit determines the wavelength of the quantum wave, it necessarily determines its energy state. Consequently, atomic systems are restricted to certain discrete, or quantized, energies. When an atom undergoes a discontinuous transition, or quantum jump, its energy changes abruptly by a sharply defined amount, and a photon of that energy is emitted when the energy of the atom decreases, or is absorbed in the opposite case.
Although atomic energies can be sharply defined, the positions of the electrons within the atom cannot be, quantum mechanics giving only the probability for the electrons to have certain locations. This is a consequence of the feature that distinguishes quantum theory from all other approaches to physics, the uncertainty principle of the German physicist Werner Heisenberg. This principle holds that measuring a particle’s position with increasing precision necessarily increases the uncertainty as to the particle’s momentum, and conversely. The ultimate degree of uncertainty is controlled by the magnitude of Planck’s constant, which is so small as to have no apparent effects except in the world of microstructures. In the latter case, however, because both a particle’s position and its velocity or momentum must be known precisely at some instant in order to predict its future history, quantum theory precludes such certain prediction and thus escapes determinism.
The complementary wave and particle aspects, or wave–particle duality, of electromagnetic radiation and of material particles furnish another illustration of the uncertainty principle. When an electron exhibits wavelike behaviour, as in the phenomenon of electron diffraction, this excludes its exhibiting particle-like behaviour in the same observation. Similarly, when electromagnetic radiation in the form of photons interacts with matter, as in the Compton effect in which X-ray photons collide with electrons, the result resembles a particle-like collision and the wave nature of electromagnetic radiation is precluded. The principle of complementarity, asserted by the Danish physicist Niels Bohr, who pioneered the theory of atomic structure, states that the physical world presents itself in the form of various complementary pictures, no one of which is by itself complete, all of these pictures being essential for our total understanding. Thus both wave and particle pictures are needed for understanding either the electron or the photon.
Although it deals with probabilities and uncertainties, the quantum theory has been spectacularly successful in explaining otherwise inaccessible atomic phenomena and in thus far meeting every experimental test. Its predictions, especially those of QED, are the most precise and the best checked of any in physics; some of them have been tested and found accurate to better than one part per billion.
Relativistic mechanics
In classical physics, space is conceived as having the absolute character of an empty stage in which events in nature unfold as time flows onward independently; events occurring simultaneously for one observer are presumed to be simultaneous for any other; mass is taken as impossible to create or destroy; and a particle given sufficient energy acquires a velocity that can increase without limit. The special theory of relativity, developed principally by Albert Einstein in 1905 and now so adequately confirmed by experiment as to have the status of physical law, shows that all these, as well as other apparently obvious assumptions, are false.
Specific and unusual relativistic effects flow directly from Einstein’s two basic postulates, which are formulated in terms of so-called inertial reference frames. These are reference systems that move in such a way that in them Isaac Newton’s first law, the law of inertia, is valid. The set of inertial frames consists of all those that move with constant velocity with respect to each other (accelerating frames therefore being excluded). Einstein’s postulates are: (1) All observers, whatever their state of motion relative to a light source, measure the same speed for light; and (2) The laws of physics are the same in all inertial frames.
The first postulate, the constancy of the speed of light, is an experimental fact from which follow the distinctive relativistic phenomena of space contraction (or Lorentz-FitzGerald contraction), time dilation, and the relativity of simultaneity: as measured by an observer assumed to be at rest, an object in motion is contracted along the direction of its motion, and moving clocks run slow; two spatially separated events that are simultaneous for a stationary observer occur sequentially for a moving observer. As a consequence, space intervals in three-dimensional space are related to time intervals, thus forming so-called four-dimensional space-time.
The second postulate is called the principle of relativity. It is equally valid in classical mechanics (but not in classical electrodynamics until Einstein reinterpreted it). This postulate implies, for example, that table tennis played on a train moving with constant velocity is just like table tennis played with the train at rest, the states of rest and motion being physically indistinguishable. In relativity theory, mechanical quantities such as momentum and energy have forms that are different from their classical counterparts but give the same values for speeds that are small compared to the speed of light, the maximum permissible speed in nature (about 300,000 kilometres per second, or 186,000 miles per second). According to relativity, mass and energy are equivalent and interchangeable quantities, the equivalence being expressed by Einstein’s famous mass-energy equation E = mc2, where m is an object’s mass and c is the speed of light.
The general theory of relativity is Einstein’s theory of gravitation, which uses the principle of the equivalence of gravitation and locally accelerating frames of reference. Einstein’s theory has special mathematical beauty; it generalizes the “flat” space-time concept of special relativity to one of curvature. It forms the background of all modern cosmological theories. In contrast to some vulgarized popular notions of it, which confuse it with moral and other forms of relativism, Einstein’s theory does not argue that “all is relative.” On the contrary, it is largely a theory based upon those physical attributes that do not change, or, in the language of the theory, that are invariant.
Conservation laws and symmetry
Fundamental forces and fields
Learn More in these related Britannica articles:
More About Physics
25 references found in Britannica articles
Edit Mode
Tips For Editing
Thank You for Your Contribution!
Uh Oh
Keep Exploring Britannica
Email this page |
520f0be82c44fd09 | Ars Technica
Turbulence, the oldest unsolved problem in physics
Enlarge / "Please prepare the cabin for technical discussions of physics..."
Enlarge / Werner Heisenberg.
An undefined definition
Hokusai’s “Great Wave.”
Reasons for and against "mission complete" aside, why is the turbulence problem so hard? The best answer comes from looking at both the history and current research directed at what Richard Feynman once called “the most important unsolved problem of classical physics.”
The most commonly used formula for describing fluid flow is the Navier-Stokes equation. This is the equation you get if you apply Newton’s first law of motion, F = ma (force = mass × acceleration), to a fluid with simple material properties, excluding elasticity, memory effects, and other complications. Complications like these arise when we try to accurately model the flows of paint, polymers, some biological fluids such as blood (there are many other substances also that violate the assumptions of the Navier-Stokes equations). But for water, air, and other simple liquids and gases, it’s an excellent approximation.
The Navier-Stokes equation is difficult to solve because it is nonlinear. This word is thrown around quite a bit, but here it means something specific. You can build up a complicated solution to a linear equation by adding up many simple solutions. An example you may be aware of is sound: the equation for sound waves is linear, so you can build up a complex sound by adding together many simple sounds of different frequencies (“harmonics”). Elementary quantum mechanics is also linear; the Schrödinger equation allows you to add together solutions to find a new solution.
But fluid dynamics doesn’t work this way: the nonlinearity of the Navier-Stokes equation means that you can’t build solutions by adding together simpler solutions. This is part of the reason that Heisenberg’s mathematical genius, which served him so well in helping to invent quantum mechanics, was put to such a severe test when it came to turbulence.
Heisenberg was forced to make various approximations and assumptions to make any progress with his thesis problem. Some of these were hard to justify; for example, the applied mathematician Fritz Noether (a brother of Emmy Noether) raised prominent objections to Heisenberg’s turbulence calculations for decades before finally admitting that they seemed to be correct after all.
(The situation was so hard to resolve that Heisenberg himself said, while he thought his methods were justified, he couldn’t find the flaw in Fritz Noether’s reasoning, either!)
The cousins of the Navier-Stokes equation that are used to describe more complex fluids are also nonlinear, as is a simplified form, the Euler equation, that omits the effects of friction. There are cases where a linear approximation does work well, such as flow at extremely slow speeds (imagine honey flowing out of a jar), but this excludes most problems of interest including turbulence.
Who's down with CFD?
Despite the near impossibility of finding mathematical solutions to the equations for fluid flows under realistic conditions, science still needs to get some kind of predictive handle on turbulence. For this, scientists and engineers have turned to the only option available when pencil and paper failed them—the computer. These groups are trying to make the most of modern hardware to put a dent in one of the most demanding applications for numerical computing: calculating turbulent flows.
The need to calculate these chaotic flows has benefited from (and been a driver of) improvements in numerical methods and computer hardware almost since the first giant computers appeared. The field is called computational fluid dynamics, often abbreviated as CFD.
Enlarge / A possible grid for calculating the flow over an airfoil.
Early in the history of CFD, engineers and scientists applied straightforward numerical techniques in order to try to directly approximate solutions to the Navier-Stokes equations. This involves dividing up space into a grid and calculating the fluid variables (pressure, velocity) at each grid point. The problem of the large range of spatial scales immediately makes this approach expensive: you need to find a solution where the flow features are accurate for the largest scales—meters for pipes, thousands of kilometers for weather, and down to near the molecular scale. Even if you cut off the length scale at the small end at millimeters or centimeters, you will still need millions of grid points.
One approach to getting reasonable accuracy with a manageable-sized grid begins with the realization that there are often large regions where not much is happening. Put another way, in regions far away from solid objects or other disturbances, the flow is likely to vary slowly in both space and time. All the action is elsewhere; the turbulent areas are usually found near objects or interfaces.
If we take another look at our airfoil and imagine a uniform flow beginning at the left and passing over it, it can be more efficient to concentrate the grid points near the object, especially at the leading and trailing edges, and not “waste” grid points far away from the airfoil. The next figure shows one possible gridding for simulating this problem.
Enlarge / A non-uniform grid for calculating the flow over an airfoil.
This is the simplest type of 2D non-uniform grid, containing nothing but straight lines. The state of the art in nonuniform grids is called adaptive mesh refinement (AMR), where the mesh, or grid, actually changes and adapts to the flow during the simulation. This concentrates grid points where they are needed, not wasting them in areas of nearly uniform flow. Research in this field is aimed at optimizing the grid generation process while minimizing the artificial effects of the grid on the solution. Here it’s used in a NASA simulation of the flow around an oscillating rotor blade. The color represents vorticity, a quantity related to angular momentum.
Enlarge / Using AMR to simulate the flow around a rotor blade.
Neal M. Chaderjian, NASA/Ames
Enlarge / AMR simulation of flow past a sphere.
Mavriplis CFD Lab
Theory and experiment
Enlarge / Exact solution for flow between plates.
Enlarge / Transition to turbulence in a wall-bounded flow.
Lee Phillips is a physicist and a repeat-contributor to Ars Technica. In the past, he's written about topics ranging from the legacy of the Fortran coding language to how Emmy Noether changed physics. |
2274f996a3d94983 | Support Aeon
Roland M, USA, Friend of Aeon
But we can’t do it without you.
Make a donation
Make a donation
Photo by Andrea Buso/Gallery Stock
Through two doors
How a sunbeam split in two became physics’ most elegant experiment, shedding light on the underlying nature of reality
Anil Ananthaswamy
Photo by Andrea Buso/Gallery Stock
Anil Ananthaswamy
is is an award-winning journalist and former staff writer and deputy news editor for New Scientist in London. His work has appeared in Nature, The Wall Street Journal and National Geographic News, among others. His latest book is Through Two Doors at Once (2018). He lives in Bangalore in India and Berkeley in California.
4,400 words
Edited by Pam Weintraub
Syndicate this Essay
Imagine throwing a baseball and not being able to tell exactly where it’ll go, despite your ability to throw accurately. Say that you are able to predict only that it will end up, with equal probability, in the mitt of one of five catchers. The baseball randomly materialises in one catcher’s mitt, while the others come up empty. And before it’s caught, you cannot talk of the baseball being real – for it has no deterministic trajectory from thrower to catcher. Until it becomes ‘real’, the ball can potentially appear in any one of the five mitts. This might seem bizarre, but the subatomic world studied by quantum physicists behaves in this counterintuitive way.
Microscopic particles, governed by the laws of quantum mechanics, throw up some of the biggest questions about the nature of our underlying reality. Do we live in a universe that is deterministic – or given to chance and the rolls of dice? Does reality at the smallest scales of nature exist independent of observers or observations – or is reality created upon observation? And are there ‘spooky actions at a distance’, Albert Einstein’s phrase for how one particle can influence another particle instantaneously, even if the two particles are miles apart.
As profound as these questions are, they can be asked and understood – if not yet satisfactorily answered – by looking at modern variations of a simple experiment that began as a study of the nature of light more than 200 years ago. It’s called the double-slit experiment, and its findings course through the veins of experimental quantum physics. The American physicist Richard Feynman in 1965 said that this experiment ‘has in it the heart of quantum mechanics’. Werner Heisenberg, the German physicist and founding member of quantum physics, would often refer to this strange experiment in his discussions with others to ‘concentrate the poison of the paradox’ thrown up by nature at the smallest scales.
In its simplest form, the experiment involves sending individual particles such as photons or electrons, one at a time, through two openings or slits cut into an otherwise opaque barrier. The particle lands on an observation screen on the other side of the barrier. If you look to see which slit the particle goes through (our intuition, honed by living in the world we do, says it must go through one or the other), the particle behaves like, well, a particle, and takes one of the two possible paths. But if one merely monitors the particle landing on the screen after its journey through the slits, the photon or electron seems to behave like a wave, ostensibly going through both slits at once.
When microscopic entities have the option of doing many things at once – like that metaphysical baseball – they seem to indulge in all possibilities. Such behaviour is impossible to visualise. Common sense fails us when dealing with the world of the quantum. To explain the outcome of something as simple as a particle encountering two slits, quantum physics falls back on mathematical equations. But unlike in classical physics, where the equations let us calculate, say, the precise trajectory of a baseball, the equations of quantum physics allow us to make only probabilistic statements about what will happen to the photon or electron. Crucially, these equations paint no clear picture about what is actually happening to the particles between the source and the screen.
It’s no wonder then that different interpretations of the double-slit experiment offer alternative perspectives on reality. For example, in the late 1920s and early ’30s, some physicists made the startling claim that a particle going through two slits has no clear path or indeed no objective reality until one observes it on a screen on the other side. At a gathering of physicists and philosophers at the Carlsberg mansion near Copenhagen in 1936, the Dutch physicist Hendrik Casimir recalled someone protesting: ‘But the electron must be somewhere on its road from source to observation screen.’ To which Niels Bohr, one of the founders of quantum mechanics, replied that the answer depends on the meaning of the phrase ‘to be’. In other words, what does it mean to say that something exists? One philosopher in the group that day, the Danish logical positivist Jørgen Jørgensen, retorted in exasperation: ‘One can, damn it, not reduce the whole of philosophy to a screen with two holes.’
Yet it is extraordinary just how much of quantum physics and philosophy can be understood using a screen with two holes – or variations thereof. The history of the double-slit experiment goes back to the early 1800s, when physicists were debating the nature of light. Does light behave like a wave or is it made of particles? The latter view had been advocated in the 17th century by no less a physicist than Isaac Newton. Light, Newton said, is corpuscular, or constituted of particles. The Dutch scientist Christiaan Huygens argued otherwise. Light, he said, is a wave – the name given to the vibrations of the medium in which the wave is travelling. For example, a wave in water is essentially the way water moves up and down as the wave propagates. Huygens argued that light is vibrations in an all-pervading ether.
In the first years of the 19th century, the English polymath Thomas Young seemingly settled the debate. He was the first to perform an experiment with a ray of sunlight, a sunbeam, through two narrow slits. On a screen on the other side, he observed not two strips of light – as you’d expect if light is made of particles going through one slit or the other – but a pattern of alternating bright and dark fringes, characteristic of two sets of waves interacting with each other.
Fig 442 from Thomas Young’s ‘Lectures’ published in 1807 detailing his original ‘two-slit’ experiment. Courtesy Wikimedia.
Imagine an ocean wave hitting a coastal breakwall that has two openings. New waves spread out from each opening and head toward the coast. These waves eventually overlap and interfere – at some places constructively (where the crest of one wave meets the crest of another), and at some places destructively (the crest of one wave encounters the trough of another). In Young’s experiment, he saw similar interference. The fringes that he observed had bright regions indicative of constructive interference and dark regions typical of destructive interference.
This view of light as a wave gained strong mathematical support when the Scottish physicist James Clerk Maxwell developed his theory of electromagnetism in the 1860s, showing that light, too, is an electromagnetic wave.
That would have been the end of story – if not for the birth of quantum physics, which began with the German physicist Max Planck’s argument in 1900 that energy comes in quanta, or tiny, indivisible units. Then, in 1905, Einstein studied the photoelectric effect, in which light falling on certain metals dislodges electrons; the effect can be explained only if light is also made of quanta, with each quantum of light analogous to a particle. These quanta of light came to be called photons.
Now, the double-slit experiment gets maddeningly counterintuitive.
Imagine beaming light at two slits one quantum, or particle, at a time. Our classical sensibilities tell us that the photon has to go through one slit or the other. And on the screen on the other side (say, a photographic plate that records photons as they arrive one by one), each photon creates a spot, and we expect these spots to pile up behind the two slits and form two bright strips.
But it’s the quantum world, so of course that’s not what happens.
As the photons land on the photographic plate, over time an interference pattern emerges. Each photon goes only to certain places on the plate – to regions that would represent constructive interference if light were a wave. The photons mostly avoid regions that represent destructive interference. It’s a clear sign of interference and wave-like behaviour.
But our source is emitting light one photon at a time. The photographic plate is recording its arrival as an individual particle. And – this is crucial – the photons are going through the apparatus one at a time. There’s no interaction between one photon and the next, or the first photon and the 10th, and so on. So, what’s interfering with what?
This is where the mathematics comes in. In the mid-1920s, a few fabulously talented physicists, among them Heisenberg, Pascual Jordan, Max Born and Paul Dirac in one group, and Erwin Schrödinger on his own, developed two ways of mathematically depicting the behaviour of the quantum underworld. These two ways turned out to be equivalent. It boils down to this: the state of any quantum system is represented by a mathematical abstraction called a wavefunction. There is a single equation – called the Schrödinger equation – which tells us how this wavefunction, and hence the state of the quantum system, changes with time. This is what allows physicists to predict the probabilities of experiment outcomes.
The wavefunction starts to spread, as a wave would, with different values at different places
In the context of the double-slit experiment, think of the wavefunction as an undulating surface that encodes information about the location of the photon. When the photon emerges from its source, the wavefunction is peaked at one location, and nearly zero everywhere else, suggesting that the photon is localised near the source.
But now mathematics kicks in. The progress of the photon can be captured by the Schrödinger equation, which reveals how the wavefunction evolves with time. The wavefunction starts to spread, as a wave would, with different values at different places. These values are related to the probabilities of finding the particle in those locations, should you choose to look.
As this wavefunction spreads, it encounters the two slits. And just like a wave of water hitting two openings in the breakwall, the wavefunction (which, don’t forget, is a mathematical abstraction) splits: one component goes through the left slit and the other through the right slit. Two wavefunctions emerge from the other side, and each spreads and evolves, still according to the Schrödinger equation. All this is deterministic and predictable. By the time the individual wavefunctions reach the photographic plate, they have spread out enough to start interfering with each other like the waves in the ocean. The photon’s state is now given by a wavefunction that is a combination of the two components’ interfering wavefunctions: the photon itself is now said to be in a ‘superposition’ of having gone through both slits. At the photographic plate, upon detection, this combined wavefunction again peaks in one location and goes to more or less zero everywhere else. The photon is registered at that location.
It all seems to make sense – sort of – until you start digging into the mathematical equations. What’s a wavefunction and what does it mean for a wavefunction to go through two slits? Is the wavefunction something real? And how does one figure out where the wavefunction will peak when it encounters the photographic plate? Why does it peak there and not elsewhere?
In the equations of quantum mechanics, the wavefunction is, well, a mathematical function. For any quantum system with more than two particles, the wavefunction does not live in the three familiar spatial dimensions of our world. Rather, it exists in something called a configuration space (an abstract mathematical space, the number of dimensions of which mushrooms with increasing number of particles, but we can ignore that for now).
In the summer of 1926, just months after Schrödinger came up with his eponymous equation for the evolution of the wavefunction, Born figured out that the value of the wavefunction at a given point in space and time can be used to calculate the probability of, for example, finding the photon at that location. For the double slit, it turns out that the probabilities are very low for regions where the two components of the wavefunction interfere destructively, and high for regions where they interfere constructively.
All this seems tantalisingly understandable, but upon closer examination more questions appear. Did the photon go through both slits at once? Does the photon have a trajectory, as it leaves the source and is eventually detected at the photographic plate? And given that the mathematics says that there are many regions where the photon can be found with a non-zero probability, why does it end up in one of those regions and not others? Finally, if the photon didn’t go through both slits, but rather the wavefunction did, is the wavefunction real?
Trying to answer such questions takes us into the heart of what’s confounding about quantum mechanics, and brings us in contact with profound philosophical issues about the nature of reality.
Take the question of determinism. When you throw a baseball in the classical world, physics will tell you where it will land. Not so in the quantum realm. The wavefunction cannot predict the exact location at which the photon will land – only its probability of landing at any one of a number of spots. For any given photon, you can never predict with certainty where it will be found: all you can say is that it will be found in region A with probability X, or in region B with probability Y, and so on. These probabilities are born out when you do the experiment numerous times with identical photons, but the precise destiny of an individual photon is not for us to know. Nature at its most fundamental seems indeterminate, random.
The double-slit experiment also allows us to explore notions of realism, the idea that an objective reality exists independent of observers or observation. Recall Casimir’s account of the gathering at the Carlsberg mansion: common sense tells us that the photon must have a clear path from the source to the photographic plate. But the mathematical formalism of standard quantum mechanics does not have a variable that captures the position of a particle as it moves – only a starting point and an end point that is contingent upon observation. And so, the photon does not have a trajectory. In fact, in one way of interpreting the formalism – named the Copenhagen interpretation after the place where it took shape – the photon has no objective reality until it lands on the photographic plate. At its extreme, the Copenhagen interpretation is often said to be antirealist. More generally, antirealism takes the position that reality does not exist independent of an observer (an observer does not necessarily mean a conscious human, it could be a photographic plate; opinions vary on this).
Einstein was a realist. He was adamant that standard quantum mechanics is incomplete, in that it lacks the necessary variables to capture trajectory – the position and momentum of a particle as it moves. Einstein was also an avowed adherent of the principle of locality: the notion that something happening in one place cannot influence something happening elsewhere any faster than the speed of light. Taken together, this philosophical position is called local realism.
The opposite of locality – nonlocality – gets highlighted by something as simple as the double-slit experiment. When the photon’s wavefunction nears the photographic plate, the photon is in a quantum superposition of being in many places at once (this is not to say that the photon actually is in these places simultaneously, it’s just a way of talking about the mathematics; the photon itself is not yet ascribed reality in the standard way of thinking about it). Upon observation, the wavefunction is said to collapse, in that its value peaks at one location and goes to near-zero elsewhere. The photon is localised – and thus found to be at one of its many possible locations.
Surely a photon, which cannot be divided into further smaller parts, cannot go through both slits at once?
Einstein pointed out a problem with this scenario (he used a slightly different thought experiment than the double-slit, but the conceptual arguments are the same). If the wavefunction is something real – or is part of the ontology of the world, in the lingo of philosophers – then its collapse is a nonlocal event. A measurement caused the wavefunction to peak in one location and simultaneously go to zero elsewhere. In principle, the wavefunction could be spread across kilometres, and this scenario would still hold. Regions of spacetime far separated from each other would be instantly influenced by the measurement-induced collapse in one location.
There is another way to think about the wavefunction that avoids this difficultly. Many followers of standard quantum mechanics would say that the wavefunction is epistemic – it merely captures our knowledge about the reality. If so, the collapse is merely a sharpening of our knowledge about reality, and so it’s not a physical event and hence does not imply nonlocality.
But if the wavefunction is not real – then what goes through the two slits? Surely a photon, which cannot be divided any further into smaller parts, cannot go through both slits at once? Something must traverse both slits simultaneously to generate the interference pattern. If not the photon or its wavefunction, what else could it be? Epistemic or ontological, the questions about the wavefunction remain.
Besides the status of the wavefunction, perhaps the most well-known issue accentuated by the double-slit experiment is how something in the quantum realm can sometimes act like a wave and sometimes like a particle, a phenomenon called wave-particle duality. If we don’t care about knowing which slit a photon goes through, the photon behaves like a wave, and lands on a certain part of the photographic plate. When enough photons land on such parts, constructive interference fringes appear. Crucially, the photons almost never go to regions that will remain dark.
But our classical minds rebel. We cannot disregard the conviction that the photon has to go through one slit or the other. So we put detectors next to the slits (let’s assume that our detectors work without destroying the photons). Something weird happens. The photons will now go through one or the other slit. Curiously, this time they will not form an interference pattern. They act like particles and they will go to those regions on the photographic plate that they shunned when acting like a wave.
When Einstein and Bohr were arguing about the double-slit experiment, it was thought that the physical disturbance produced by the act of looking causes the interference pattern to disappear.
But since then, it’s become clear that the problem runs deeper. Experimentalists devised ways of determining which slit a photon goes through without, ostensibly, disturbing it. Turns out that the mere presence of this ‘which-way’ (welcher-weg in German) information in the environment – something that in principle can be extracted – destroys the interference pattern. The photons behave like particles.
Wave-particle duality was pushed to the extreme by ever more sophisticated versions of the double-slit experiment. In 1982, Marlan Scully, who was at the University of New Mexico, and Kai Drühl, of the Max Planck Institute for Quantum Optics in Munich, came up with one of the most memorable variations. What if there is a way to first collect information about which way a photon goes (causing it to act like a particle) and then erase this information? Would erasing the information cause the photons to act like waves – even after they presumably went through one slit or another as a particle and landed on the photographic plate?
The empirical answer to this famous quantum eraser question is an unequivocal yes. In 2000, Scully teamed up with Yoon-Ho Kim and colleagues at the University of Maryland in Baltimore and did this experiment using atoms that could be made to emit a pair of entangled photons (in mathematical terms, two entangled photons are described by the same wavefunction, so an action on one photon immediately influences the other, because of the collapse of the single wavefunction. Nonlocality is an explicit aspect of this version of the double-slit experiment). The experimental setup was engineered in such a way that if one photon of the entangled pair went through a double slit, the partner photon could be used to extract ‘which-way’ information about the first photon. Scully and colleagues showed that if you erased this information, the first photon acted like a wave; otherwise it acted like a particle.
It was clear that whether a photon behaves like a wave or a particle depends on the choice of the experimental setup. Based on this finding, in 1978 the American physicist John Wheeler dreamed up perhaps the most famous version of the double-slit experiment, which he called ‘delayed choice’.
Wheeler’s bright idea was to ask: what if we delayed the choice of the type of experiment to perform until after the photon had entered the apparatus? Say it enters an apparatus that is configured to look for the photon’s wave nature. So, the photon should – according to the standard way of thinking – go into a superposition of taking two paths. If the two paths are recombined, they interfere, and we get fringes. Now, said Wheeler, let’s perform a sleight of hand. Just before the photon is detected, let’s reconfigure the apparatus so that it’s now looking for the photon’s particle nature. This can be done by taking out the device that causes the paths to recombine, thus letting each path go on its way to separate detectors.
As it happens, you cannot fool the photon no matter how hard you try. Experimentalists have performed Wheeler’s thought experiment with increasing precision and sophistication – and the quantum world rules. When they remove, at the very last instant, the device that recombines the two photon paths, the photon acts like a particle, suggesting that it took one path or the other, even though at the start it should have entered a superposition of taking both paths at once. Based on such results, Wheeler argued that the photon has no intrinsic nature – either wave or particle – before it’s detected. Otherwise, if it entered the apparatus like a particle and you did a switcheroo nanoseconds before detection and chose to look at its wave nature, it would have to go back in time and re-enter the apparatus as a wave. How else can you explain the observed interference pattern? In Wheeler’s account, denying the photon a reality independent of observation avoids postulating absurd time-travelling photons, but then you have to live with the antirealism of standard quantum mechanics, which some find unpalatable.
Experimentalists have also combined delayed-choice and quantum-erasure experiments into one mind-boggling delayed-choice quantum-erasure experiment – in which you not only delay the choice of what to see (particle or wave nature), but you can also randomly erase this choice. Again, the photon or any quantum system will show you only one face or the other – and what it reveals depends on the final state of the experimental apparatus.
All particles in the Universe are influenced instantly by a form of nonlocality that would make Einstein wince
Such experiments suggest that the act of measurement collapses the wavefunction, but what does collapse really mean? Even more enigmatically, does collapse ultimately need observation by a conscious human being? (To be clear, almost no physicist today thinks that this is the case.)
To avoid the common sense-defying conceptual problems of standard quantum mechanics, there have been myriad attempts to reinterpret the results and pose new theories. One of these efforts is the so-called de Broglie-Bohm theory, which holds that reality is both a wave and a particle. In this theory, a particle is real and has a definite position at all times, and hence a trajectory; but the particle is guided by a pilot wave that evolves according to the Schrödinger equation. In the context of a double-slit experiment, the particle always goes through one slit or the other, but the pilot wave, or the wavefunction, goes through both and interferes with itself on the other side of the slits, and this interference pattern guides the particle to the photographic plate. The de Broglie-Bohm theory is realist: both particles and wavefunctions are real. But the theory is nonlocal. All particles – no matter where they are in the Universe – are influenced instantly by the evolving wavefunction, a form of extreme nonlocality that would make Einstein wince.
Yet another approach invokes a spontaneous collapse of the wavefunction, independent of observers or observation. Such theories are engineered so that small quantum systems, such as individual particles, can stay in a superposition of states for aeons, but larger agglomerations of particles – such as a cat – cannot, and so will almost instantaneously collapse into one of many probable states. Such theories predict that, as systems get bigger, spontaneous collapses will cause quantum states to become classical; they predict a mass scale at which this happens, dividing quantum reality from the classical world.
The quantum nanophysicist Markus Arndt and his colleagues at the University of Vienna are using the double-slit experiment to probe this divide by sending larger and larger things, such organic macromolecules and even viruses, through a double-slit to look for interference. If they see interference, the process is quantum mechanical. But if they can observe the disappearance of the interference pattern and show that it’s solely because the mass of the object going through the two slits is more than some threshold value, then they can claim to have found the quantum-classical divide. The search continues.
It’s hard to overstate the importance of the double-slit experiment to the entire enterprise of quantum mechanics, despite its astonishing simplicity and elegance.
As Feynman put it during a lecture at Cornell University in New York in 1964: ‘Any … situation in quantum mechanics, it turns out, can always be explained afterwards by saying: “You remember the case of the experiment with the two holes?”’ In 1964, even Feynman couldn’t have known just how important the experiment would turn out to be. But physics has yet to successfully explain the double-slit experiment. The case remains unsolved.
Anil Ananthaswamy
Syndicate this Essay
Enjoyed this article?
Get Aeon straight
to your inbox
Join our newsletter
Aeon is not-for-profit
and free for everyone
Make a donation
Time after time
Paul Halpern
History of Science
Mathematics as thought
Mathematical ideas are some of the most transformative and beautiful in history. So why do they get so little attention?
Mordechai Levy-Eichel |
22f08420dd9c014c |
yes no Was this document useful for you?
Thank you for your participation!
Document related concepts
Theoretical and experimental justification for the Schrödinger equation wikipedia, lookup
Conservation of energy wikipedia, lookup
Woodward effect wikipedia, lookup
Lorentz force wikipedia, lookup
Speed of gravity wikipedia, lookup
Electrical resistance and conductance wikipedia, lookup
Gravity wikipedia, lookup
Weightlessness wikipedia, lookup
Free fall wikipedia, lookup
Electromagnetism wikipedia, lookup
Work (physics) wikipedia, lookup
Anti-gravity wikipedia, lookup
Faster-than-light wikipedia, lookup
Time in physics wikipedia, lookup
Classical central-force problem wikipedia, lookup
Newton's laws of motion wikipedia, lookup
Mass versus weight wikipedia, lookup
Electromagnetic mass wikipedia, lookup
History of thermodynamics wikipedia, lookup
Nuclear physics wikipedia, lookup
Negative mass wikipedia, lookup
Chapter 24 | Electromagnetic Waves
time of 1.00 ns. (a) What is the maximum magnetic field
strength in the wave? (b) What is the intensity of the beam?
(c) What energy does it deliver on a 1.00-mm 2 area?
37. Show that for a continuous sinusoidal electromagnetic
wave, the peak intensity is twice the average intensity (
I 0 = 2I ave ), using either the fact that E 0 = 2E rms , or
B 0 = 2B rms , where rms means average (actually root
mean square, a type of average).
38. Suppose a source of electromagnetic waves radiates
uniformly in all directions in empty space where there are no
absorption or interference effects. (a) Show that the intensity
is inversely proportional to r 2 , the distance from the source
squared. (b) Show that the magnitudes of the electric and
magnetic fields are inversely proportional to r .
39. Integrated Concepts
An LC circuit with a 5.00-pF capacitor oscillates in such a
manner as to radiate at a wavelength of 3.30 m. (a) What is
the resonant frequency? (b) What inductance is in series with
the capacitor?
40. Integrated Concepts
What capacitance is needed in series with an
800 â µH
inductor to form a circuit that radiates a wavelength of 196 m?
41. Integrated Concepts
Police radar determines the speed of motor vehicles using the
same Doppler-shift technique employed for ultrasound in
medical diagnostics. Beats are produced by mixing the
double Doppler-shifted echo with the original frequency. If
1.50Ã10 9 -Hz microwaves are used and a beat frequency
of 150 Hz is produced, what is the speed of the vehicle?
(Assume the same Doppler-shift formulas are valid with the
speed of sound replaced by the speed of light.)
42. Integrated Concepts
Assume the mostly infrared radiation from a heat lamp acts
like a continuous wave with wavelength 1.50 µm . (a) If the
lampâs 200-W output is focused on a personâs shoulder, over
a circular area 25.0 cm in diameter, what is the intensity in
W/m 2 ? (b) What is the peak electric field strength? (c) Find
the peak magnetic field strength. (d) How long will it take to
increase the temperature of the 4.00-kg shoulder by 2.00º C
, assuming no other heat transfer and given that its specific
heat is 3.47Ã10 J/kg â
ºC ?
43. Integrated Concepts
On its highest power setting, a microwave oven increases the
temperature of 0.400 kg of spaghetti by 45.0ºC in 120 s. (a)
What was the rate of power absorption by the spaghetti, given
that its specific heat is 3.76Ã10 J/kg â
ºC ? (b) Find the
average intensity of the microwaves, given that they are
absorbed over a circular area 20.0 cm in diameter. (c) What is
the peak electric field strength of the microwave? (d) What is
its peak magnetic field strength?
44. Integrated Concepts
Electromagnetic radiation from a 5.00-mW laser is
concentrated on a 1.00-mm 2 area. (a) What is the intensity
W/m 2 ? (b) Suppose a 2.00-nC static charge is in the
beam. What is the maximum electric force it experiences? (c)
If the static charge moves at 400 m/s, what maximum
magnetic force can it feel?
45. Integrated Concepts
A 200-turn flat coil of wire 30.0 cm in diameter acts as an
antenna for FM radio at a frequency of 100 MHz. The
magnetic field of the incoming electromagnetic wave is
perpendicular to the coil and has a maximum strength of
1.00Ã10 â12 T . (a) What power is incident on the coil? (b)
What average emf is induced in the coil over one-fourth of a
cycle? (c) If the radio receiver has an inductance of 2.50 µH
, what capacitance must it have to resonate at 100 MHz?
46. Integrated Concepts
If electric and magnetic field strengths vary sinusoidally in
time, being zero at t = 0 , then E = E 0 sin 2Ï ft and
B = B 0 sin 2Ï ft . Let f = 1.00 GHz here. (a) When are
the field strengths first zero? (b) When do they reach their
most negative value? (c) How much time is needed for them
to complete one cycle?
47. Unreasonable Results
A researcher measures the wavelength of a 1.20-GHz
electromagnetic wave to be 0.500 m. (a) Calculate the speed
at which this wave propagates. (b) What is unreasonable
about this result? (c) Which assumptions are unreasonable or
48. Unreasonable Results
The peak magnetic field strength in a residential microwave
oven is 9.20Ã10
T . (a) What is the intensity of the
microwave? (b) What is unreasonable about this result? (c)
What is wrong about the premise?
49. Unreasonable Results
An LC circuit containing a 2.00-H inductor oscillates at such
a frequency that it radiates at a 1.00-m wavelength. (a) What
is the capacitance of the circuit? (b) What is unreasonable
about this result? (c) Which assumptions are unreasonable or
50. Unreasonable Results
An LC circuit containing a 1.00-pF capacitor oscillates at
such a frequency that it radiates at a 300-nm wavelength. (a)
What is the inductance of the circuit? (b) What is
unreasonable about this result? (c) Which assumptions are
unreasonable or inconsistent?
51. Create Your Own Problem
Consider electromagnetic fields produced by high voltage
power lines. Construct a problem in which you calculate the
intensity of this electromagnetic radiation in W/m 2 based on
the measured magnetic field strength of the radiation in a
home near the power lines. Assume these magnetic field
strengths are known to average less than a µT . The
intensity is small enough that it is difficult to imagine
mechanisms for biological damage due to it. Discuss how
much energy may be radiating from a section of power line
several hundred meters long and compare this to the power
likely to be carried by the lines. An idea of how much power
this is can be obtained by calculating the approximate current
responsible for µT fields at distances of tens of meters.
52. Create Your Own Problem |
110f16238597349a | Skip to main content
Einstein's Greatest Mistake - David Bodanis ****
I would compare Einstein's Greatest Mistake with the movie Lincoln - it is, in effect, a biopic in book form with all the glory and flaws that can bring. Compared with a good biography, a biopic will distort the truth and emphasise parts of the story that aren't significant because they make for a good screen scene. But I would much rather someone watched the movie than never found out anything about Lincoln - and similarly I'd much rather someone read this book than didn't know anything about Einstein, other than he was that crazy clever guy with the big white hair. Einstein's Greatest Mistake isn't going to impress popular science regulars, but it is likely to appeal to many readers who would never pick up a Gribbin or a Carroll. Because of this, I think we need to overcome any worries about inaccuracies and be genuinely grateful - and just as some viewers of the movie Lincoln will go on to read a good biography to find out more, so I believe that reading this book will draw some readers into the wider sphere of popular science.
What Bodanis does brilliantly is to give us a feel for Einstein as a person. I don't think I've ever read a book that does this as well, both in terms of the social life of young Einstein and what he went through in his Princeton years, which most scientific biographies don't give much time to, because he produced very little that was new and interesting. Apart from that, Einstein's Greatest Mistake is also very good when it comes to descriptions of supporting events, such as Eddington's eclipse expeditions of 1919 or the way that Hubble made sure he got himself in the limelight when Einstein visited. Whenever there's a chance for storytelling, Bodanis triumphs.
It seems almost breaking a butterfly on the wheel to say where things go wrong with science or history, a bit like those irritating people who insist on telling you what's illogical in the plot of a fun film. But I do think I need to pick out a few examples to show what I mean.
In describing Einstein's remarkable 1905 work, Bodanis portrays this as being driven by an urge to combine the nature of matter and energy, culminating in Einstein's E=mc2 paper (in reality, the closest the paper gets to this is m=L/V2). Yet this paper was pretty much an afterthought. The driver for special relativity was Maxwell's revelations about the nature of light, while the book pretty much ignores the paper for which Einstein won the Nobel Prize, one of the foundations of quantum physics.
When covering that same area, which Bodanis accurately identifies as the greatest mistake - quantum theory - the approach taken is to make Bohr, Born and Heisenberg the 'pro' faction and Einstein plus Schrödinger the 'antis'. Although this was true in terms of interpretation, the stance means that the Schrödinger equation is pretty much ignored, which gives a weirdly unbalanced picture of quantum physics. Bodanis picks on the uncertainty principle as the heart of quantum physics. Unfortunately, he then uses Heisenberg's microscope thought experiment as the definitive proof of the principle - entirely omitting that Bohr immediately tore the idea to shreds, to Heisenberg's embarrassment, pointing out that the thought experiment totally misunderstands the uncertainty principle, as it isn't produced by observation.
This isn't, then, a book for the science or history of science enthusiast. However, I stand by my assertion that this kind of biopic popular science does have an important role - I am sure the book will appeal to a wide range of people who think that science is difficult and unapproachable. And as such I heartily endorse it.
For more on David Bodanis see our interview and Twitter | Facebook | Instagram
Review by Brian Clegg
Post a Comment
Popular posts from this blog
Superior - Angela Saini *****
Where are the chemistry popular science books?
by Brian Clegg
The Art of Statistics - David Spiegelhalter *****
|
b09e21290b26d2b3 | Open main menu
In December 2018, the astrophysicist Jamie Farnes from the University of Oxford proposed a "dark fluid" theory, related, in part, to notions of gravitationally repulsive negative masses, presented earlier by Albert Einstein, that may help better understand, in a testable manner, the considerable amounts of unknown dark matter and dark energy in the cosmos.[3][4]
In general relativityEdit
Negative mass is any region of space in which for some observers the mass density is measured to be negative. This could occur due to a region of space in which the stress component of the Einstein stress–energy tensor is larger in magnitude than the mass density. All of these are violations of one or another variant of the positive energy condition of Einstein's general theory of relativity; however, the positive energy condition is not a required condition for the mathematical consistency of the theory.
Inertial versus gravitational massEdit
Ever since Newton first formulated his theory of gravity, there have been at least three conceptually distinct quantities called mass:
• inertial mass – the mass m that appears in Newton's second law of motion, F = ma
• "active" gravitational mass – the mass that produces a gravitational field that other masses respond to
• "passive" gravitational mass – the mass that responds to an external gravitational field by accelerating.
Einstein's equivalence principle postulates that inertial mass must equal passive gravitational mass. The law of conservation of momentum requires that active and passive gravitational mass be identical. All experimental evidence to date has found these are, indeed, always the same. In considering negative mass, it is important to consider which of these concepts of mass are negative. In most analyses of negative mass, it is assumed that the equivalence principle and conservation of momentum continue to apply, and therefore all three forms of mass are still the same.
In his 4th-prize essay for the 1951 Gravity Research Foundation competition, Joaquin Mazdak Luttinger considered the possibility of negative mass and how it would behave under gravitational and other forces.[5]
In 1957, following Luttinger's idea, Hermann Bondi suggested in a paper in Reviews of Modern Physics that mass might be negative as well as positive.[6] He pointed out that this does not entail a logical contradiction, as long as all three forms of mass are negative, but that the assumption of negative mass involves some counter-intuitive form of motion. For example, an object with negative inertial mass would be expected to accelerate in the opposite direction to that in which it was pushed (non-gravitationally).
There have been several other analyses of negative mass, such as the studies conducted by R. M. Price,[7] however none addressed the question of what kind of energy and momentum would be necessary to describe non-singular negative mass. Indeed, the Schwarzschild solution for negative mass parameter has a naked singularity at a fixed spatial position. The question that immediately comes up is, would it not be possible to smooth out the singularity with some kind of negative mass density. The answer is yes, but not with energy and momentum that satisfies the dominant energy condition. This is because if the energy and momentum satisfies the dominant energy condition within a spacetime that is asymptotically flat, which would be the case of smoothing out the singular negative mass Schwarzschild solution, then it must satisfy the positive energy theorem, i.e. its ADM mass must be positive, which is of course not the case.[8][9] However, it was noticed by Belletête and Paranjape that since the positive energy theorem does not apply to asymptotic de Sitter spacetime, it would actually be possible to smooth out, with energy–momentum that does satisfy the dominant energy condition, the singularity of the corresponding exact solution of negative mass Schwarzschild–de Sitter, which is the singular, exact solution of Einstein's equations with cosmological constant.[10] In a subsequent article, Mbarek and Paranjape showed that it is in fact possible to obtain the required deformation through the introduction of the energy–momentum of a perfect fluid.[11]
Runaway motionEdit
Although no particles are known to have negative mass, physicists (primarily Hermann Bondi in 1957,[6] William B. Bonnor in 1964 and 1989,[12][13] then Robert L. Forward[14]) have been able to describe some of the anticipated properties such particles may have. Assuming that all three concepts of mass are equivalent according to the equivalence principle, the gravitational interactions between masses of arbitrary sign can be explored, based on the Newtonian approximation of the Einstein field equations. The interaction laws are then:
In yellow, the "preposterous" runaway motion of positive and negative masses described by Bondi and Bonnor.
• Positive mass attracts both other positive masses and negative masses.
• Negative mass repels both other negative masses and positive masses.
For two positive masses, nothing changes and there is a gravitational pull on each other causing an attraction. Two negative masses would repel because of their negative inertial masses. For different signs however, there is a push that repels the positive mass from the negative mass, and a pull that attracts the negative mass towards the positive one at the same time.
Hence Bondi pointed out that two objects of equal and opposite mass would produce a constant acceleration of the system towards the positive-mass object,[6] an effect called "runaway motion" by Bonnor who disregarded its physical existence, stating:
Such a couple of objects would accelerate without limit (except relativistic one); however, the total mass, momentum and energy of the system would remain zero. This behavior is completely inconsistent with a common-sense approach and the expected behavior of "normal" matter. Thomas Gold even hinted that the runaway linear motion could be used in a perpetual motion machine if converted as a circular motion:
But Forward showed that the phenomenon is mathematically consistent and introduces no violation of conservation laws.[14] If the masses are equal in magnitude but opposite in sign, then the momentum of the system remains zero if they both travel together and accelerate together, no matter what their speed:
And equivalently for the kinetic energy:
However, this is perhaps not exactly valid if the energy in the gravitational field is taken into account.
Forward extended Bondi's analysis to additional cases, and showed that even if the two masses m(−) and m(+) are not the same, the conservation laws remain unbroken. This is true even when relativistic effects are considered, so long as inertial mass, not rest mass, is equal to gravitational mass.
This behaviour can produce bizarre results: for instance, a gas containing a mixture of positive and negative matter particles will have the positive matter portion increase in temperature without bound. However, the negative matter portion gains negative temperature at the same rate, again balancing out. Geoffrey A. Landis pointed out other implications of Forward's analysis,[16] including noting that although negative mass particles would repel each other gravitationally, the electrostatic force would be attractive for like charges and repulsive for opposite charges.
Forward used the properties of negative-mass matter to create the concept of diametric drive, a design for spacecraft propulsion using negative mass that requires no energy input and no reaction mass to achieve arbitrarily high acceleration.
Forward also coined a term, "nullification", to describe what happens when ordinary matter and negative matter meet: they are expected to be able to cancel out or nullify each other's existence. An interaction between equal quantities of positive mass matter (hence of positive energy E = mc2) and negative mass matter (of negative energy E = −mc2) would release no energy, but because the only configuration of such particles that has zero momentum (both particles moving with the same velocity in the same direction) does not produce a collision, all such interactions would leave a surplus of momentum, which is classically forbidden. So once this runaway phenomenon had been revealed, the scientific community considered negative mass could not exist in the universe.
Arrow of time and energy inversionEdit
In quantum mechanicsEdit
In quantum mechanics, the time reversal operator is complex, and can either be unitary or antiunitary. In quantum field theory, T has been arbitrarily chosen to be antiunitary for the purpose of avoiding the existence of negative energy states:
On the contrary, if the time reversal operator is chosen to be unitary (in conjunction with a unitary parity operator) in relativistic quantum mechanics, unitary PT-symmetry produces energy (and mass) inversion.[18]
In dynamical systems theoryEdit
In group theory of dynamical systems theory, the time reversal operator is real, and time reversal produces energy (and mass) inversion.
In 1970, Jean-Marie Souriau demonstrated, using Kirillov's orbit method and the coadjoint representation of the full dynamical Poincaré group, i.e. the group action on the dual space of its Lie algebra (or Lie coalgebra), that reversing the arrow of time is equal to reversing the energy of a particle (hence its mass, if the particle has one).[19][20]
In general relativity, the universe is described as a Riemannian manifold associated to a metric tensor solution of Einstein's field equations. In such a framework, the runaway motion prevents the existence of negative matter.[6][13]
In green, gravitational interactions in bimetric theories which differ from those elaborated by Bondi and Bonnor, solving the runaway paradox.
Some bimetric theories of the universe propose that two parallel universes instead of one may exist with an opposite arrow of time, linked together by the Big Bang and interacting only through gravitation.[21][22][23] The universe is then described as a manifold associated to two Riemannian metrics (one with positive mass matter and the other with negative mass matter). According to group theory, the matter of the conjugated metric would appear to the matter of the other metric as having opposite mass and arrow of time (though its proper time would remain positive). The coupled metrics have their own geodesics and are solutions of two coupled field equations.[24][25][26][27] The Newtonian approximation then provides the following gravitational interaction laws:
• Like masses attract (positive mass attracts positive mass, negative mass attracts negative mass).
• Unlike masses repel (positive mass and negative mass repel each other).
Those laws are different to the laws described by Bondi and Bonnor, and solve the runaway paradox. The negative matter of the coupled metric, interacting with the matter of the other metric via gravity, could be an alternative candidate for the explanation of dark matter, dark energy, cosmic inflation and accelerating universe.[24][25][26][27]
In Gauss's law of gravityEdit
In electromagnetism one can derive the energy density of a field from Gauss's law, assuming the curl of the field is 0. Performing the same calculation using Gauss's law for gravity produces a negative energy density for a gravitational field.
Gravitational interaction of antimatterEdit
The overwhelming consensus among physicists is that antimatter has positive mass and should be affected by gravity just like normal matter. Direct experiments on neutral antihydrogen have not been sensitive enough to detect any difference between the gravitational interaction of antimatter, compared to normal matter.[28]
Bubble chamber experiments provide further evidence that antiparticles have the same inertial mass as their normal counterparts. In these experiments, the chamber is subjected to a constant magnetic field that causes charged particles to travel in helical paths, the radius and direction of which correspond to the ratio of electric charge to inertial mass. Particle–antiparticle pairs are seen to travel in helices with opposite directions but identical radii, implying that the ratios differ only in sign; but this does not indicate whether it is the charge or the inertial mass that is inverted. However, particle–antiparticle pairs are observed to electrically attract one another. This behavior implies that both have positive inertial mass and opposite charges; if the reverse were true, then the particle with positive inertial mass would be repelled from its antiparticle partner.
In quantum mechanicsEdit
In 1928, Paul Dirac's theory of elementary particles, now part of the Standard Model, already included negative solutions.[29] The Standard Model is a generalization of quantum electrodynamics (QED) and negative mass is already built into the theory.
Morris, Thorne and Yurtsever[30] pointed out that the quantum mechanics of the Casimir effect can be used to produce a locally mass-negative region of space–time. In this article, and subsequent work by others, they showed that negative matter could be used to stabilize a wormhole. Cramer et al. argue that such wormholes might have been created in the early universe, stabilized by negative-mass loops of cosmic string.[31] Stephen Hawking has proved that negative energy is a necessary condition for the creation of a closed timelike curve by manipulation of gravitational fields within a finite region of space;[32] this proves, for example, that a finite Tipler cylinder cannot be used as a time machine.
Schrödinger equationEdit
For energy eigenstates of the Schrödinger equation, the wavefunction is wavelike wherever the particle's energy is greater than the local potential, and exponential-like (evanescent) wherever it is less. Naively, this would imply kinetic energy is negative in evanescent regions (to cancel the local potential). However, kinetic energy is an operator in quantum mechanics, and its expectation value is always positive, summing with the expectation value of the potential energy to yield the energy eigenvalue.
For wavefunctions of particles with zero rest mass (such as photons), this means that any evanescent portions of the wavefunction would be associated with a local negative mass–energy. However, the Schrödinger equation does not apply to massless particles; instead the Klein–Gordon equation is required.
In special relativityEdit
One can achieve a negative mass independent of negative energy. According to mass–energy equivalence, mass m is in proportion to energy E and the coefficient of proportionality is c2. Actually, m is still equivalent to E although the coefficient is another constant[33] such as c2.[34] In this case, it is unnecessary to introduce a negative energy because the mass can be negative although the energy is positive. That is to say,
Under the circumstances,
and so,
When v = 0,
where m0 < 0 is invariant mass and invariant energy equals E0 = −m0c2 > 0. The squared mass is still positive and the particle can be stable.
From the above relation,
The negative momentum is applied to explain negative refraction, the inverse Doppler effect and the reverse Cherenkov effect observed in a negative index metamaterial. The radiation pressure in the metamaterial is also negative[35] because the force is defined as F = dp/dt. Negative pressure exists in dark energy too. Using these above equations, the energy–momentum relation should be
Substituting the Planck–Einstein relation E = ħω and de Broglie's p = ħk, we obtain the following dispersion relation
when the wave consists of a stream of particles whose energy–momentum relation is (wave–particle duality) and can be excited in a negative index metamaterial. The velocity of such a particle is equal to
and range is from zero to infinity
Moreover, the kinetic energy is also negative
In fact, negative kinetic energy exists in some models[36] to describe dark energy (phantom energy) whose pressure is negative. In this way, the negative mass of exotic matter is now associated with negative momentum, negative pressure, negative kinetic energy and faster-than-light phenomena.
See alsoEdit
1. ^ "Scientists observe liquid with 'negative mass', ich turns physics completely upside down", The Independent, 21 April 2017.
2. ^ "Scientists create fluid that seems to defy physics:'Negative mass' reacts opposite to any known physical property we know", CBC, 20 April 2017
3. ^ University of Oxford (5 December 2018). "Bringing balance to the universe: New theory could explain missing 95 percent of the cosmos". EurekAlert!. Retrieved 6 December 2018.
4. ^ Farnes, J.S. (2018). "A Unifying Theory of Dark Energy and Dark Matter: Negative Masses and Matter Creation within a Modified ΛCDM Framework". Astronomy & Astrophysics. 620: A92. arXiv:1712.07962. Bibcode:2018A&A...620A..92F. doi:10.1051/0004-6361/201832898.
5. ^ Luttinger, J. M. (1951). "On "Negative" mass in the theory of gravitation" (PDF). Gravity Research Foundation. Cite journal requires |journal= (help)
6. ^ a b c d Bondi, H. (1957). "Negative Mass in General Relativity". Reviews of Modern Physics. 29 (3): 423–428. Bibcode:1957RvMP...29..423B. doi:10.1103/RevModPhys.29.423.
7. ^ Price, R. M. (1993). "Negative mass can be positively amusing" (PDF). Am. J. Phys. 61 (3): 216. Bibcode:1993AmJPh..61..216P. doi:10.1119/1.17293.
8. ^ Shoen, R.; Yao, S.-T. (1979). "On the proof of the positive mass conjecture in general relativity" (PDF). Commun. Math. Phys. 65 (1): 45–76. Bibcode:1979CMaPh..65...45S. doi:10.1007/BF01940959.
9. ^ Witten, Edward (1981). "A new proof of the positive energy theorem". Comm. Math. Phys. 80 (3): 381–402. Bibcode:1981CMaPh..80..381W. doi:10.1007/bf01208277.
10. ^ Belletête, Jonathan; Paranjape, Manu (2013). "On Negative Mass". Int. J. Mod. Phys. D. 22 (12): 1341017. arXiv:1304.1566. Bibcode:2013IJMPD..2241017B. doi:10.1142/S0218271813410174.
11. ^ Mbarek, Saoussen; Paranjape, Manu (2014). "Negative Mass Bubbles in De Sitter Spacetime". Phys. Rev. D. 90 (10): 101502. arXiv:1407.1457. Bibcode:2014PhRvD..90j1502M. doi:10.1103/PhysRevD.90.101502.
12. ^ Bonnor, W. B.; Swaminarayan, N. S. (June 1964). "An exact solution for uniformly accelerated particles in general relativity". Zeitschrift für Physik. 177 (3): 240–256. Bibcode:1964ZPhy..177..240B. doi:10.1007/BF01375497.
13. ^ a b c Bonnor, W. B. (1989). "Negative mass in general relativity". General Relativity and Gravitation. 21 (11): 1143–1157. Bibcode:1989GReGr..21.1143B. doi:10.1007/BF00763458.
14. ^ a b Forward, R. L. (1990). "Negative matter propulsion". Journal of Propulsion and Power. 6: 28–37. doi:10.2514/3.23219.
15. ^ Bondi, H.; Bergmann, P.; Gold, T.; Pirani, F. (January 1957). "Negative mass in general relativity". In M. DeWitt, Cécile; Rickles, Dean (eds.). The Role of Gravitation in Physics: Report from the 1957 Chapel Hill Conference. Open Access Epubli 2011. ISBN 978-3869319636. Retrieved 21 December 2018.
16. ^ Landis, G. (1991). "Comments on Negative Mass Propulsion". J. Propulsion and Power. 7 (2): 304. doi:10.2514/3.23327.
17. ^ Weinberg, Steven (2005). "Relativistic Quantum Mechanics: Space Inversion and Time-Reversal" (PDF). The Quantum Theory of Fields. 1: Foundations. Cambridge University Press. pp. 75–76. ISBN 9780521670531.
18. ^ Debergh, N.; Petit, J.-P.; D'Agostini, G. (November 2018). "On evidence for negative energies and masses in the Dirac equation through a unitary time-reversal operator". Journal of Physics: Communications. 2 (11): 115012. arXiv:1809.05046. Bibcode:2018JPhCo...2k5012D. doi:10.1088/2399-6528/aaedcc.
19. ^ Souriau, J.-M. (1970). Structure des Systèmes Dynamiques [Structure of Dynamic Systems] (in French). Paris: Dunod. p. 199. ISSN 0750-2435.
20. ^ Souriau, J.-M. (1997). "A mechanistic description of elementary particles: Inversions of space and time" (PDF). Structure of Dynamical Systems. Boston: Birkhäuser. pp. 173–193. doi:10.1007/978-1-4612-0281-3_14. ISBN 978-1-4612-6692-1.
22. ^ Petit, J. P. (1995). "Twin universes cosmology" (PDF). Astrophysics and Space Science. 226 (2): 273–307. Bibcode:1995Ap&SS.226..273P. CiteSeerX doi:10.1007/BF00627375.
23. ^ Barbour, Julian; Koslowski, Tim; Mercati, Flavio (2014). "Identification of a Gravitational Arrow of Time". Physical Review Letters. 113 (18): 181101. arXiv:1409.0917. Bibcode:2014PhRvL.113r1101B. doi:10.1103/PhysRevLett.113.181101. PMID 25396357.
24. ^ a b Hossenfelder, S. (15 August 2008). "A Bi-Metric Theory with Exchange Symmetry". Physical Review D. 78 (4): 044015. arXiv:0807.2838. Bibcode:2008PhRvD..78d4015H. doi:10.1103/PhysRevD.78.044015.
25. ^ a b Hossenfelder, Sabine (June 2009). Antigravitation. 17th International Conference on Supersymmetry and the Unification of Fundamental Interactions. Boston: American Institute of Physics. arXiv:0909.3456. doi:10.1063/1.3327545.
26. ^ a b Petit, J. P.; d'Agostini, G. (2014). "Negative mass hypothesis in cosmology and the nature of dark energy". Astrophysics and Space Science. 354 (2): 611. Bibcode:2014Ap&SS.354..611P. doi:10.1007/s10509-014-2106-5.
27. ^ a b Petit, J. P.; d'Agostini, G. (2014). "Cosmological bimetric model with interacting positive and negative masses and two different speeds of light, in agreement with the observed acceleration of the Universe". Modern Physics Letters A. 29 (34): 1450182. Bibcode:2014MPLA...2950182P. doi:10.1142/S021773231450182X.
29. ^ Dirac, P. A. M. (1928). "The Quantum Theory of the Electron". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 117 (778): 610–624. Bibcode:1928RSPSA.117..610D. doi:10.1098/rspa.1928.0023.
30. ^ Morris, Michael S.; Thorne, Kip S.; Yurtsever, Ulvi (1988). "Wormholes, Time Machines, and the Weak Energy Condition" (PDF). Physical Review Letters. 61 (13): 1446–1449. Bibcode:1988PhRvL..61.1446M. doi:10.1103/PhysRevLett.61.1446. PMID 10038800.
31. ^ Cramer, John G.; Forward, Robert L.; Morris, Michael S.; Visser, Matt; Benford, Gregory; Landis, Geoffrey A. (1995). "Natural wormholes as gravitational lenses". Physical Review D. 51 (6): 3117. arXiv:astro-ph/9409051. Bibcode:1995PhRvD..51.3117C. doi:10.1103/PhysRevD.51.3117.
32. ^ Hawking, Stephen (2002). The Future of Spacetime. W. W. Norton. p. 96. ISBN 978-0-393-02022-9.
33. ^ Wang, Z.Y, Wang P.Y, Xu Y.R. (2011). "Crucial experiment to resolve Abraham–Minkowski Controversy". Optik. 122 (22): 1994–1996. arXiv:1103.3559. Bibcode:2011Optik.122.1994W. doi:10.1016/j.ijleo.2010.12.018.CS1 maint: multiple names: authors list (link)
External linksEdit |
902986521694cb22 | In Young's double-slit experiment, MWI states that in some "worlds" the particle goes through one slit, and in others it goes through the other. If this is so, why do we get an interference pattern? The particle is not interacting with itself and is defined by classical mechanics?
Furthermore, is MWI deterministic? If it is, how is it even possible for there to be many worlds? Surely the state of the world at a given time + the laws of physics will always result in the same future space-time in any deterministic theory?
There doesn't exist any "totally well-defined" realization of MWI but its champions want to agree with the basic experimental facts so they would almost certainly say that there is no splitting of the worlds when a particle goes through two slits.
Instead, if there's any splitting of the worlds at all, and different MWI advocates have different opinions whether it occurs at all (in particular, Everett's opinion was No – and he wrote an expletive on a document by DeWitt who introduced "many worlds" for the first time) – the splitting only occurs at the moment when many degrees of freedom decohere or a human observation is made, e.g. after the particle hits the photographic plate at one point or another. (Again, there is no well-defined description of the moment when the splitting is supposed to take place, or how it takes place etc.)
Before that, MWI assumes that the wave function "objectively exists" and its parts of the wave function going through the two slits interfere with each other just like they interfere in standard quantum mechanics.
MWI is meant to be a deterministic theory where the wave function is an object that objectively exists, a set of classical degrees of freedom, and that evolves according to a deterministic Schrödinger equation. There is no contradiction between having many worlds and determinism. However, there may be a contradiction between assumptions of MWI and other observed facts.
Most contemporary MWI advocates imagine that the "many worlds" are nothing else than the "parts" of the wave function that are widely separated in the configuration space (a power of the real space). No universal set of rules how the wave function may be divided to parts exists but when the evolution of the parts get "macroscopically different", it starts to make sense to say that "two portions of the wave function are hugely separated".
• $\begingroup$ But how does this relate to my quetsions? $\endgroup$ – Bonj May 12 '16 at 13:33
• 1
$\begingroup$ I tried to label the key words in bold face and added a paragraph to address some additional question I first overlooked. $\endgroup$ – Luboš Motl May 12 '16 at 14:04
• $\begingroup$ Okay, thanks, your answer makes more sense now. I still don't fully understand how there is no contradiction between having many worlds and determinism. Is it because of the degrees of freedom? $\endgroup$ – Bonj May 12 '16 at 14:32
• 1
$\begingroup$ By determinism, MWI advocates mean that the wave function evolves according to the deterministic Schrödinger equation and there's "nothing else". In standard QM, the measurement is when the randomness enters. In MWI, the idea is that the randomness never enters. Instead, all the possible outcomes of a measurement exist simultaneously, and you find yourself in one of them - one of the parallel Universes. The more probable "worlds" should be more likely to be "yours" than the less probable ones. MWI cannot really explain how it's possible if all the "worlds" are equally real. But they don't care $\endgroup$ – Luboš Motl May 12 '16 at 15:18
• $\begingroup$ Understood, thanks. And one last thing. If not all the "worlds" are equally real, then surely randomness or hidden variables must be introduced to explain why some worlds are more real than others? Or if our world is the only real world and we are merely navigating our way through the universal wave-function, to explain why our world is the real world given the potential that it would not have been. $\endgroup$ – Bonj May 12 '16 at 16:20
Firstly, I find the hostility against many worlds interpretation inadequate. As far as understanding quantum mechanics goes, the case is far from closed. I believe that it is unlikely, that this question will be answered near future (and it is plausible that it will be never resolved). Nevertheless, something being inherently hard should not suppress our thinking.
Secondly, why do I believe that many worlds interpretation is a sound approach? Before even hearing the MWI-word, we were talking about quantum entanglement in our department coffee room's blackboard (some 7 years ago) and I realized that measurement can be explained by quantum entanglement of the observer and the system. I think most physicists are too busy thinking about real problems (as they should), that they never sit down to think about these fundamental issues. (Also, one of the reasons is probably the choice of name and the stupid splitting movie film picture in Wikipedia).
Thirdly, I will have do define carefully what MWI is. As is correctly stated in the other answers, there are various definitions (the vicious say that there are as many definitions of MWI as there are supporters):
First thing you have to understand about MWI that it is formulated by a PhD student 60 years ago. The decoherence will not be invented in 15 years and the current main-stream interpretation of quantum mechanics is the Copenhagen interpretation with unintuitive wave function collapse postulate. Still, I find the thesis to be quite remarkable (Everett is also cited by decoherence paper, but I cannot access it so I do not know if it is in good or bad).
In Everett's thesis, he defines two ways of changing the wave function (as listed by von Neumann).
1. Process: Suddenly, by assigning the system into eigenstates with probabilities $|<\phi_i|\Psi>|^2$.
2. Process: Via unitary evolution, according to Schrodinger equation.
He lists several alternatives, but sticks with alternative 5:
As far as I see it, this is the crux of many-worlds interpretation. There is no need to consider a) unexplained sudden changes to wave function b) probabilistic rules of resetting quantum simulations at certain moments. Here a) and b) differ by whether the collapse is real in other interpretations.
Here is another quote:
We have seen that in almost all of these observer states it appears to the observer that the probabilistic aspects of the usual form of quantum theory are valid. We have thus seen how pure wave mechanics, without any initial probability assertions, can lead to these notions on a subjective level, as appearances to observers.
Here is one more:
We have shown that our theory based on pure wave mechanics, which takes as the basic description of physical systems the state function - supposed to be an objective description (i.e., in one-one, rather than statistical, correspondence to the behavior of the system) - can be put in satisfactory correspondence with experience. We saw that the probabilistic assertions of the usual interpretation of quantum mechanics can be deduced from this theory, in a manner analogous to the methods of classical statistical mechanics, as subjective appearances to observers - observers which were regarded simply as physical systems subject to the same type of description and laws as any other systems, and having no preferred position. The theory is therefore capable of supplying us with a complete conceptual model of the universe, consistent with the assumption that it contains more than one observer.
As far as I interpret this, Everett is saying that probabilistic interpretation of quantum mechanics is emergent from properties of unitary evolution of the wave function. This is appealing in many ways. First of all, wave function collapse as a phenomenon is explained. It is an entanglement between observer and the system. When added with decoherence, one has a theory of measurement which is complete and contains no awkward collapse postulates.
Finally and foremost, and this is why all the fuzz, here comes the many worlds part. Pure wave mechanics, also for observers, means that there are finite amplitudes in many different observer states simultaneously. Thus, if with a world we mean all things we can get information about, we will see, that the scientist who measured spin up will never communicate with the scientist who measured spin down. With orthodox interpretation, both of these scientists still exists in the world wave function.
Now, finally, and unfortunately at so late part of this answer because of all the fuzz, we can come to your questions:
It does not state that at all! Exactly the opposite. It states all the regular things about two-slit experiment which can also be stated with probabilistic interpretations: Only, if one measured from which slit the particle goes, one does not get the interference pattern. Probabilistic interpretations call this collapse. MWI says that there is a quantum entangled state $|a>|A>+|b>|B>$ where small letters are slits and capitals observers reading a or b from their measuring apparatus.
Is MWI deterministic?
Yes, everything will be governed by the unitary evolution of the wave function. The Schöringer equation is of first order in time and thus exactly solvable when given a boundary condition (say wave function at surface t=0). This means that the system is fully deterministic. To elaborate further, in many worlds interpretation the unitary evolution of the wave function produces collapse of wave function only as an emergent process (thus abadoning process 1, as listed above).
To conclude, MWI should be treated as a historical way to modern understanding of quantum mechanics and measurement. In it's 60's formulation it is outdated, but the concepts still hold. I find it funny, that the proponents of probabilistic theory seem to find the MWI-theory non-sense, even their own interpretation can be derived MWI. It remains to be seen if future experiments can shed light into these interpretations. For example, although very unlikely, it would be very exciting if the progress in quantum computing would hit unexplained mysterious limitations requiring new theories. As far as the dislike towards this theory goes, it is probably related to fact that discussing interpretations is mostly hobby to everyone. There are very few who actually does research on this fields, and could comment on the latest events. The rest are probably like me: when I go to work tomorrow, I will wonder about theoretical modelling of plasmonics in photovoltaics, since that is what I get paid to do.
As stated by Luboš Motl in another answer, there is no consensus between Everett's interpretation contenders about what it means exactly. The common idea is that no state evolution other than unitary as per Schrödinger should be accepted (no collapse) but that's about it.
Indeed it is not clear at all what is supposed to be splitting or branching, and when. For a thorough review of this topic ("many what, exactly"?), see Multiplicity in Everett's interpretation of quantum mechanics (Louis Marchildon, 2015).
The Many Worlds Interpretation is a theory of non relativistic quantum mechanics where there is a wavefunction from the configuration space of the entire system (and this is utterly essential) into the joint spin state of the entire system and it evolves according to the Schrödinger equation, and nothing else.
No one claims (or has ever claimed) that in one world the particle goes through one slit and in another world the particle goes through the other. Instead, what happens is the wavefunction of the system evolves according to the Schrödinger equation, hence, according to the Hamiltonian.
If you want to see why interference happens, it helps to first contrast with a situation where there is no interference, e.g. one where you get which-way information.
We will have the same setup for both situations. So for instance, the x axis could represent the x position of particle one and the y axis could represent the $y$ position of particle one and the z axis could represent the position of particle two. Then the wavefunction must assign a value to each combination of the locations of each particle (the configuration space is $\mathbb R^{3n}$ and a single point $ (\vec r_1, \vec r_2, ... , \vec r_n)$ tells you a classical configuration of every particle).
But your wavefunction assigns values to every possible configuration. Possibly zero. Possibly nonzero. Let's have the wavefunction have its complex phase oscillate in the x direction, but be confined to a finite spread in the x direction (confined near $x=-1$). The oscillation of phase in the x direction means the region of support (where it has nonzero values) will move in the positive x direction. Let's also focus it in the z direction so it has a finite spread in the z direction (confined near $z=0$). But in the y direction it will be bimodal. It will have a region near $y=-10$ where it is nonzero, and a region near $y=+10$ where it is nonzero. But go a little bit from those values and it drops to zero.
So it's like you had a packet moving in the x direction and focused in the x direction and focused in the z direction and also focused in the y direction near $y=-10$ and then you had a second packet moving in the x direction and focused in the x direction and focused in the z direction and also focused in the y direction near $y=+10$. Your initial wave is the sum of those packets, so it is nonzero in both those regions of configuration space.
But those regions aren't worlds.
Now, if you go through a slit with a which way detector, then the wave confined near $y=-10$ goes through a slit with center at $x=0$ and $y=-10$ but because of the which way detector, the wave is deflected in the direction $-\hat z$ so even as it spreads out on the $y$ direction it is systematically deflected in the $-\hat z$ direction.
So it eventually hits a screen at $x=200$ all concentrated at $z=-200$. It's like if you put a fiber optic cable on the slit and aimed the beam down.
And the wave confined near $y=+10$ goes through a slit with center at $x=0$ and $y=+10$ but because of the which way detector, the wave is deflected in the direction $+\hat z$ so even as it spreads out on the $y$ direction it is systematically deflected in the $\hat z$ direction.
So it eventually hits a screen at $x=200$ all concentrated at $z=+200$. It's like if you put a fiber optic cable on the slit and aimed the beam up.
If you actually put fiber optic cables in your slits and sent classical light through it, you'd get a beam deflected down from one slit and a beam deflected up from the other slit, and you would get single slit dictation pattern from each slit.
Both classically and quantum mechanically.
Now let's say there is no which way pattern. Then you wave simply spreads, but isn't deflected. Which means the wave near $y=-10$ spreads out and the wave near $y=+10$ spreads out and by the time they get to $x=+200$ the support of each wave overlaps the support of the other wave.
So it really was one wave with two disjoint regions of support, and the two regions evolved to overlap. When that happens the values interfere and the wave develops parts that have larger values than others.
So far, this is just what the Schrödinger equations says. No interpretations have come into play at all.
The wavefunction gets regions with different sized values solely based on whether those disjoint regions of support evolve to overlap. And they do start to overlap when the location of the particle moving towards the screen isn't causing (by the Hamiltonian) any other particles to move differently.
But when you hit the screen, other particles do start to move differently. The particles at that screen location change. And the particles at the other screen locations do not change.
So when you had the which way detector, you effectively had a screen right there and the waves immediately start to veer away from each other in configuration space. Whereas if you don't have the which way detector they spread and start to overlap before they hit there screen (where they then start to veer away from each other).
Your claim that the particle follows classical mechanics is plain wrong. And MWI doesn't claim that.
In MWI you do have worlds. But many worlds are defined as separate wavepacket whose supports will never overlap with each other in the future. This basically requires that they separately veer in different directions. And since there are $3n$ directions in $\mathbb R^{3n}$ it is easy to veer in different directions once the wave has made many twists in many different particles' directions. And just like two people randomly moving in $3n$ directions. When $n>>10^{24}$ the odds are really bad that they will ever bump into each other.
So the cutoff of being separate worlds isn't sharp, any more than the size cutoff to use thermodynamics isn't sharp. But eventually when enough particles have been involved, the different twists have placed the support into such different regions that they are not going to overlap again.
So they were not separate worlds when they went through the slit, since separate worlds are defined by not having their support overlap ever again.
The MWI is completely deterministic. There are many worlds because it waves can have regions of support (places where they currently are nonzero) that diverge away from each other in a huge dimensioned space and never overlap again and thus act independently.
There isn't a "classical state of the world." The classical states of the world are literally the points in the huge dimensioned configuration space. And the wavefunction is assigning nonzero vaules to whole regions of configuration space. A world in MWI is a current assignment of nonzero values to a region that evolves in the future as if that is the only place where the values are currently nonzero. You can have multiple worlds. And by definition each acts as if it is the only one.
If you know the definitions it isn't mysterious at all. Start with configuration space. Then assign values to each configuration. Then note that the region where the values are nonzero (the support) sometimes splits into disjoint regions that evolve over time to never again overlap.
Then note that the values in those disjoint regions can act like they are the whole wavefunction and can't tell if they are. Hence it makes sense to call them a world and let each one model itself as the whole world.
If you didn't allow that, you'd be insisting they have to continue modeling parts of the configuration space that don't affect them. For no scientific reason whatsoever. And it wouldn't be wrong per se. It's just extra bookkeeping that doesn't affect that world's predictions. Insisting on modeling things that don't affect your predictions is the domain of people with strong opinions.
People that merely care about making predictions accept that there is a point where it is safe (prediction wise) to simplify things down to have a given world select itself as the only one. Since it won't matter to its predictions about its own future evolution.
• 1
$\begingroup$ After reading this I know even less about this than I thought I knew before. It's like a bunch of abused words that can be found in the layman literature about quantum mechanics hacked together into a lengthy rant of sorts that doesn't tell me anything about what the MWI really is (I already know that it's nonsense, but that takes but one sentence to motivate). $\endgroup$ – CuriousOne May 12 '16 at 21:49
• 1
$\begingroup$ @CuriousOne: this is not kinder garden here. Please remain polite while interacting with others. I am also sure your contribution could be much more valuable if you did not waste your time in unjustified (and not very informative) rants about view points you don't accept or don't understand. Regarding Timaeus' answer, this is basically the same as Lubos Motl's one, phrased differently and with an attempt to explain how determinism does not necessarily rule out effective branching. $\endgroup$ – gatsu May 16 '16 at 9:51
• $\begingroup$ @gatsu: That was the polite way of saying it. Timaeus can do better, in my opinion, and I hope he will. I really tried reading it and I got confused, just like I said. I have absolutely nothing against somebody writing a good answer, but this isn't it, I am afraid. $\endgroup$ – CuriousOne May 16 '16 at 9:57
In the many world interpretation of quantum mechanics, there is only a split into different branches when decoherence takes place. So there would be no split in the two slit experiment until the photon hits the photographic plate. At that stage decoherence takes place and there would be a split into a separate Universe (branch) each of which has the photon hit a different part of the photographic plate. The theory is deterministic until you try to answer what is the probability of being in the different Universes. This is then taken as being proportional to the amplitude squared of the value of the wave function for each Universe.
There is no widely accepted way of experimentally distinguishing the many worlds interpretation from the more traditional Copenhagen interpretation. So the question which is more correct is more of a philosophical one rather than a physics one.
protected by Qmechanic May 15 '16 at 17:39
Would you like to answer one of these unanswered questions instead?
|
c49f054511db3e97 | Quantum Physics Equation Examples
Dec 05, 2016 · How Madhyamika Philosophy Explains the Mystery of Quantum Physics Abstract: The theory of relativity informs us that our science is a science of our experience, and not a science of a universe that is independent of us as conscious observers (see the explanation in this article: Why Relativity Exists). This nature of our science is also reflected in the formulation of quantum mechanics…
Quantum Physics by James G. Branson. An important aspect of the design of this note is to maintain a concise basic treatment of the physics, with derivations and examples available behind hyperlinks.
Summaries of Spacetime, Relativity, and Quantum Physics. Click here for an overview of gravity written before the 2016 definitive discovery.
Schrodinger's Equation Theoretical Physics, Quantum Physics, Holographic. (τ ) calculator – step by step calculation, formula & solved example problem to find.
Here’s a nice surprise: quantum physics is less complicated than we thought. it’s impossible to know certain pairs of things about a quantum particle at once. For example, the more precisely you.
Apr 03, 2016 · Physics, Astrophysics, Cosmology. This proposal is an effort to investigate unification in Physics. Quantum Space Elements Theory of Physics – quantumspaceelements.com
Jul 25, 2016. Much of the philosophical literature connected with quantum theory centers on the. For example, for a system consisting of n point particles, the state of the. The equation of motion obeyed by a quantum state vector is the.
“We live in a quantum world,” says quantum physicist Sir Peter Knight. “We don’t often exploit it, but there are examples where we do. which “hinges on a really weird property of quantum physics in.
In Quantum Mechanics: The Physics of the Microscopic World, you will learn logical tools to grasp the paradoxes and astonishing insights of this field.
Thermodynamics 2nd Law Eli5 This question can be answered, but only with the addition of some extra assumptions. What we know for sure is that the entropy of the universe as a whole cannot decrease, and hence fluids can’t be unmixed without increasing the entropy of some other system by an amount equal to $Delta S_text{mixing}$. Second Law of
Now that we have discovered a "new" theory (quantum mechanics as exemplified by Schrödinger's equation) we. This equation gives us the wave function ψ for the electron in the hydrogen atom. For example, in the Bohr atom, the electron.
COLLEGE OF ARTS & SCIENCES PHYSICS Detailed course offerings (Time Schedule) are available for. Spring Quarter 2019; Summer Quarter 2019; PHYS 101 Physical Science By Inquiry I (5-) NW, QSR View course details in MyPlan: PHYS 101. PHYS 104 Facilitated Group Inquiry I (2) NW Laboratory-based development of concepts and reasoning skills. Develops problem-solving techniques and.
A mathematical problem underlying fundamental questions in particle and quantum physics is provably unsolvable. also predicts some new and very weird physics that hasn’t been seen before. For.
There’s a new equation floating around the world of physics these days that would make Einstein. But that’s just the simplest example. Susskind points out that quantum fields — the stuff that.
“Before our work, quantum-level properties of electrons in chiral crystals were rarely studied,” said M. Zahid Hasan, the Eugene Higgins Professor of Physics at Princeton. points governed by the.
Oct 5, 2012. 3 Quantum Mechanics—Some Preliminaries. 15. 3.3 Simple Examples of Time Independent Schrödinger Equation….. 19.
The particle in a 1-dimensional well is the most simple example. direction, the time-independent Schrödinger equation can be written.
Electromagnetic equations are used in marine sciences Physics Formulas, He also explained the photoelectric effect, which is the emission of electrons from.
Like Ron Meyers, for example. Ronald E. Meyers is a physicist at the. but the warfighter as well. That’s where quantum physics came into the equation (so to speak). “My main goal right now is to.
For one thing, science is full of examples of equations devised for one phenomenon turning. the fruiting patterns of trees in pistachio orchards. But doesn’t quantum physics involve a rather.
Mar 14, 2017. Two scientists were surprised to find pi lurking in a quantum mechanics formula for the energy states of the hydrogen atom. Happy Pi Day!
This is an example. physics played no part in cracking the genetic code, nor is it necessary to understand how it functions. The great virtue of this book is its thesis – it sets out a clear and.
Loop quantum gravity is. is infinite and all known laws of physics break down including Einstein’s theory. Theoretical physicists have been questioning if singularities really exist through complex.
Schrödinger Equation and Quantum Numbers. Potential energy for the hydrogen atom: The time-independent Schrödinger equation in three dimensions is then:.
An example of this. matter described by quantum mechanics has a spectral gap, or not, cannot exist. Which limits the extent to which we can predict the behavior of quantum materials, and.
The God Delusion By Richard Dawkins Pdf Download Freethought Multimedia Index – Richard Dawkins – James Randi – Michael Shermer – Steven Pinker – Daniel Dennett – Phil Plait – Massimo Pigliucci. This page is here to share any kind of multimedia files featuring Richard Dawkins. There are some direct streaming media links and some to download. Jul 4, 2007. Richard Dawkins' The
In this guidebook we try to understand the weird physics that come into play in th.. Quantum Superposition And Entanglement Explained. also behave as waves and even gave a mathematical equation for the wavelength of such a wave.
The amplituhedron, or a similar geometric object, could help by removing two deeply rooted principles of physics: locality and unitarity. “Both are hard-wired in. to the 9-page formula. In other.
Among many important and fundamental issues in science, solving the Schroedinger equation (SE) of atoms and molecules is one of the ultimate goals in chemistry, physics. "Quantum Dilemma" for the.
Feb 07, 2018 · The Many Worlds Interpretation of quantum mechanics holds that there are an infinite number of parallel Universes that exist, holding all possible outcomes of a quantum.
Sep 17, 2017. Welcome to the world of quantum mechanics and be ready to be. equation which helped him to win the Nobel Prize in Physics in 1933.
Astronomy What To See Tonight This astronomy calendar of celestial events contains dates for notable celestial events including moon phases, meteor showers, eclipses, oppositions, conjunctions, and other interesting events.Most of the astronomical events on this calendar can be seen with unaided eye, although some may require a good pair of binoculars for best viewing. “We talk a lot about what
The memory chip that stores the time in your clock couldn’t have been devised without a key equation in quantum. example. Given Euclid’s basic geometric assumptions, Pythagoras’s theorem is true.
The difficulty comes from the fact that the electrical properties of materials are governed by the laws of quantum physics, which contain equations that are extremely. in the next five to ten years.
The Schrödinger equation is a linear partial differential equation that describes the wave function or state function of a quantum-mechanical system.: 1–2 It is a key result in quantum mechanics, and its discovery was a significant landmark in the development of the subject.The equation is named after Erwin Schrödinger, who derived the equation in 1925, and published it in 1926, forming.
Professor Carl Hagen was teaching his students how to use the variational method and used the hydrogen atom as an example in the. connection between physics and math. I find it fascinating that a.
In March 1905 , Einstein created the quantum theory of light, the idea that light. This equation predicted an evolution of energy roughly a million times more. For example, in 1910, Einstein answered a basic question: 'Why is the sky blue?'
A new approach to combining Einstein’s General Theory of Relativity with quantum physics could come out of a paper published. From relativity they took the equation E=MC 2, which holds that when.
"We are often taught (see, for example, the classic book by Leonard. from the nonlinear classical wave equation to the linear Schrödinger equation—that is, from classical to quantum physics—the.
Schrodinger wave equation is the core foundation of modern quantum. quantum mechanics for dummies, this page is about quantum mechanics for dummies. One example is the differntial law where the law is necessary to be written in.
Usmle Step 2 Epidemiology About the USMLE Step 3 Exam. Step 3 is the third written board exam you will take during Medical school that is sponsored by both the Federation of State Medical Boards and the National Board of Medical Examiners. Entomologist Santa Clarita Ca Ancestry of George W. Bush 1 George Walker Bush, b.New Haven, Conn., 6
Aug 13, 2013. 23 Responses to Quantum Mechanics Explained. with some initial wave function, and evolving it according to the Schrodinger equation.
The energy "uncertainty" introduced in quantum theory combines with the. the simple, one-particle Schrödinger equation into a relativistic quantum wave equation. for example, a conjugate gradient method for computing the inverse of the.
Notes on Quantum Mechanics with Examples of Solved Problems. This book explains the following topics: Schrodinger equation, Wronskian theorem, Hilbert Spaces for Physicists, Postulates of Quantum Mechanics, Harmonic Oscillator in Operatorial Form, Angular momentum quantization, Symmetries in Quantum Mechanics, Spin, Identical particles, Hydrogen atom, Time-dependent and independent. |
7f90b1f3b4143ae7 | Tagged: Max Tegmark Toggle Comment Threads | Keyboard Shortcuts
• richardmitnick 7:46 am on June 15, 2017 Permalink | Reply
Tags: , , Max Tegmark, , When Neurology Becomes Theology, Wilder Penfield
From Nautilus: “When Neurology Becomes Theology”
June 15, 2017
Robert A. Burton
A neurologist’s perspective on research into consciousness.
Early in my neurology residency, a 50-year-old woman insisted on being hospitalized for protection from the FBI spying on her via the TV set in her bedroom. The woman’s physical examination, lab tests, EEGs, scans, and formal neuropsychological testing revealed nothing unusual. Other than being visibly terrified of the TV monitor in the ward solarium, she had no other psychiatric symptoms or past psychiatric history. Neither did anyone else in her family, though she had no recollection of her mother, who had died when the patient was only 2.
The psychiatry consultant favored the early childhood loss of her mother as a potential cause of a mid-life major depressive reaction. The attending neurologist was suspicious of an as yet undetectable degenerative brain disease, though he couldn’t be more specific. We residents were equally divided between the two possibilities.
Fortunately an intern, a super-sleuth more interested in data than speculation, was able to locate her parents’ death certificates. The patient’s mother had died in a state hospital of Huntington’s disease—a genetic degenerative brain disease. (At that time such illnesses were often kept secret from the rest of the family.) Case solved. The patient was a textbook example of psychotic behavior preceding the cognitive decline and movement disorders characteristic of Huntington’s disease.
WHERE’S THE MIND?: Wilder Penfield spent decades studying how brains produce the experience of consciousness, but concluded “There is no good evidence, in spite of new methods, that the brain alone can carry out the work that the mind does.” Montreal Neurological Institute
Over my career, I’ve gathered a neurologist’s working knowledge of the physiology of sensations. I realize neuroscientists have identified neural correlates for emotional responses. Yet I remain ignorant of what sensations and responses are at the level of experience. I know the brain creates a sense of self, but that tells me little about the nature of the sensation of “I-ness.” If the self is a brain-generated construct, I’m still left wondering who or what is experiencing the illusion of being me. Similarly, if the feeling of agency is an illusion, as some philosophers of mind insist, that doesn’t help me understand the essence of my experience of willfully typing this sentence.
Slowly, and with much resistance, it’s dawned on me that the pursuit of the nature of consciousness, no matter how cleverly couched in scientific language, is more like metaphysics and theology. It is driven by the same urges that made us dream up gods and demons, souls and afterlife. The human urge to understand ourselves is eternal, and how we frame our musings always depends upon prevailing cultural mythology. In a scientific era, we should expect philosophical and theological ruminations to be couched in the language of physical processes. We argue by inference and analogy, dragging explanations from other areas of science such as quantum physics, complexity, information theory, and math into a subjective domain. Theories of consciousness are how we wish to see ourselves in the world, and how we wish the world might be.
My first hint of the interaction between religious feelings and theories of consciousness came from Montreal Neurological Institute neurosurgeon Wilder Penfield’s 1975 book, Mystery of the Mind: A Critical Study of Consciousness and the Human Brain. One of the great men of modern neuroscience, Penfield spent several decades stimulating the brains of conscious, non-anesthetized patients and noting their descriptions of the resulting mental states, including long-lost bits of memory, dreamy states, deju vu, feelings of strangeness, and otherworldliness. What was most startling about Penfield’s work was his demonstration that sensations that normally qualify how we feel about our thoughts can occur in the absence of any conscious thought. For example, he could elicit feelings of familiarity and strangeness without the patient thinking of anything to which the feeling might apply. His ability to spontaneously evoke pure mental states was proof positive that these states arise from basic brain mechanisms.
And yet, here’s Penfield’s conclusion to his end-of-career magnum opus on the nature of the mind: “There is no good evidence, in spite of new methods, that the brain alone can carry out the work that the mind does.” How is this possible? How could a man who had single-handedly elicited so much of the fabric of subjective states of mind decide that there was something to the mind beyond what the brain did?
In the last paragraph of his book, Penfield explains, “In ordinary conversation, the ‘mind’ and ‘the spirit of man’ are taken to be the same. I was brought up in a Christian family and I have always believed, since I first considered the matter … that there is a grand design in which all conscious individuals play a role … Since a final conclusion … is not likely to come before the youngest reader of this book dies, it behooves each one of us to adopt for himself a personal assumption (belief, religion), and a way of life without waiting for a final word from science on the nature of man’s mind.”
Front and center is Penfield’s observation that, in ordinary conversation, the mind is synonymous with the spirit of man. Further, he admits that, in the absence of scientific evidence, all opinions about the mind are in the realm of belief and religion. If Penfield is even partially correct, we shouldn’t be surprised that any theory of the “what” of consciousness would be either intentionally or subliminally infused with one’s metaphysics and religious beliefs.
To see how this might work, take a page from Penfield’s brain stimulation studies where he demonstrates that the mental sensations of consciousness can occur independently from any thought that they seem to qualify. For instance, conceptualize thought as a mental calculation and a visceral sense of the calculation. If you add 3 + 3, you compute 6, and simultaneously have the feeling that 6 is the correct answer. Thoughts feel right, wrong, strange, beautiful, wondrous, reasonable, far-fetched, brilliant, or stupid. Collectively these widely disparate mental sensations constitute much of the contents of consciousness. But we have no control over the mental sensations that color our thoughts. No one can will a sense of understanding or the joy of an a-ha! moment. We don’t tell ourselves to make an idea feel appealing; it just is. Yet these sensations determine the direction of our thoughts. If a thought feels irrelevant, we ignore it. If it feels promising, we pursue it. Our lines of reasoning are predicated upon how thoughts feel.
No image caption or credit.
Shortly after reading Penfield’s book, I had the good fortune to spend a weekend with theoretical physicist David Bohm. Bohm took a great deal of time arguing for a deeper and interconnected hidden reality (his theory of implicate order). Though I had difficulty following his quantum theory-based explanations, I vividly remember him advising me that the present-day scientific approach of studying parts rather than the whole could never lead to any final answers about the nature of consciousness. According to him, all is inseparable and no part can be examined in isolation.
In an interview in which he was asked to justify his unorthodox view of scientific method, Bohm responded, “My own interest in science is not entirely separate from what is behind an interest in religion or in philosophy—that is to understand the whole of the universe, the whole of matter, and how we originate.” If we were reading Bohm’s argument as a literary text, we would factor in his Jewish upbringing, his tragic mistreatment during the McCarthy era, the lack of general acceptance of his idiosyncratic take on quantum physics, his bouts of depression, and the close relationship between his scientific and religious interests.
Many of today’s myriad explanations for how consciousness arises are compelling. But once we enter the arena of the nature of consciousness, there are no outright winners.
Christof Koch, the chief scientific officer of the Allen Institute for Brain Science in Seattle, explains that a “system is conscious if there’s a certain type of complexity. And we live in a universe where certain systems have consciousness. It’s inherent in the design of the universe.”
According to Daniel Dennett, professor of philosophy at Tufts University and author of Consciousness Explained and many other books on science and philosophy, consciousness is nothing more than a “user-illusion” arising out of underlying brain mechanisms. He argues that believing consciousness plays a major role in our thoughts and actions is the biological equivalent of being duped into believing that the icons of a smartphone app are doing the work of the underlying computer programs represented by the icons. He feels no need to postulate any additional physical component to explain the intrinsic qualities of our subjective experience.
Meanwhile, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology, tells us consciousness “is how information feels when it is being processed in certain very complex ways.” He writes that “external reality is completely described by mathematics. If everything is mathematical, then, in principle, everything is understandable.” Rudolph E. Tanzi, a professor of neurology at Harvard University, admits, “To me the primal basis of existence is awareness and everything including ourselves and our brains are products of awareness.” He adds, “As a responsible scientist, one hypothesis which should be tested is that memory is stored outside the brain in a sea of consciousness.”
Each argument, taken in isolation, seems logical, internally consistent, yet is at odds with the others. For me, the thread that connects these disparate viewpoints isn’t logic and evidence, but their overall intent. Belief without evidence is Richard Dawkins’ idea of faith. “Faith is belief in spite of, even perhaps because of, the lack of evidence.” These arguments are best read as differing expressions of personal faith.
For his part, Dennett is an outspoken atheist and fervent critic of the excesses of religion. “I have absolutely no doubt that secular and scientific vision is right and deserves to be endorsed by everybody, and as we have seen over the last few thousand years, superstitious and religious doctrines will just have to give way.” As the basic premise of atheism is to deny that for which there is no objective evidence, he is forced to avoid directly considering the nature of purely subjective phenomena. Instead he settles on describing the contents of consciousness as illusions, resulting in the circularity of using the definition of mental states (illusions) to describe the general nature of these states.
The problem compounds itself. Dennett is fond of pointing out (correctly) that there is no physical manifestation of “I,” no ghost in the machine or little homunculus that witnesses and experiences the goings on in the brain. If so, we’re still faced with asking what/who, if anything, is experiencing consciousness? All roads lead back to the hard problem of consciousness.
Though tacitly agreeing with those who contend that we don’t yet understand the nature of consciousness, Dennett argues that we are making progress. “We haven’t yet succeeded in fully conceiving how meaning could exist in a material world … or how consciousness works, but we’ve made progress: The questions we’re posing and addressing now are better than the questions of yesteryear. We’re hot on the trail of the answers.”
By contrast, Koch is upfront in correlating his religious upbringing with his life-long pursuit of the nature of consciousness. Raised as a Catholic, he describes being torn between two contradictory views of the world—the Sunday view reflected by his family and church, and the weekday view as reflected in his work as a scientist (the sacred and the profane).
In an interview with Nautilus, Koch said, “For reasons I don’t understand and don’t comprehend, I find myself in a universe that had to become conscious, reflecting upon itself.” He added, “The God I now believe in is closer to the God of Spinoza than it is to Michelangelo’s paintings or the God of the Old Testament, a god that resides in this mystical notion of all-nothingness.” Koch admitted, “I’m not a mystic. I’m a scientist, but this is a feeling I have.” In short, Koch exemplifies a truth seldom admitted—that mental states such as a mystical feeling shape how one thinks about and goes about studying the universe, including mental states such as consciousness.
Both Dennett and Koch have spent a lifetime considering the problem of consciousness; though contradictory, each point of view has a separate appeal. And I appreciate much of Dennett and Koch’s explorations in the same way that I can mull over Aquinas and Spinoza without necessarily agreeing with them. One can enjoy the pursuit without believing in or expecting answers. After all these years without any personal progress, I remain moved by the essential nature of the quest, even if it translates into Sisyphus endlessly pushing his rock up the hill.
The spectacular advances of modern science have generated a mindset that makes potential limits to scientific inquiry intuitively difficult to grasp. Again and again we are given examples of seemingly insurmountable problems that yield to previously unimaginable answers. Just as some physicists believe we will one day have a Theory of Everything, many cognitive scientists believe that consciousness, like any physical property, can be unraveled. Overlooked in this optimism is the ultimate barrier: The nature of consciousness is in the mind of the beholder, not in the eye of the observer.
It is likely that science will tell us how consciousness occurs. But that’s it. Although the what of consciousness is beyond direct inquiry, the urge to explain will persist. It is who we are and what we do.
See the full article here .
Please help promote STEM in your local schools.
STEM Icon
Stem Education Coalition
• richardmitnick 12:39 pm on March 2, 2017 Permalink | Reply
Tags: , Hugh Everett III, Max Tegmark,
From Nautilus: “Evil Triumphs in These Multiverses, and God Is Powerless”
March 2, 2017
Dean Zimmerman
Mathematical Multiverse: According to Tegmark, for every possible way in which mathematical models dictate that matter can be consistently arranged to fill a spacetime universe, there exists such a universe. platonicsolids.info
A religious Everettian might hope that God would just prune the tree, and leave only those branches where good triumphs over evil. But as the philosopher Jason Turner of the University of Arizona has pointed out, such pruning undermines the Schrödinger equation. If God prevents the worst universes from emerging on the world-tree, then the deterministic law would not truly describe the evolution of the multiverse. Not all the superposed states that it predicts would actually occur, but only those that God judges to be “good enough.”
Even if the pruning argument doesn’t work, there is another reason to think that the many-worlds interpretation doesn’t pose a serious threat to belief in God. Everett’s multiverse is just a much expanded physical world like this one, and finding we were in it would be like finding we were in a world with many more inhabited planets, some the amplified versions of the worst parts of our planet and others the amplified versions of the best parts. And so, even the worst parts of an Everettian multiverse are just particularly ugly versions of planet Earth. If an afterlife helps to explain our seemingly pointless suffering, then it would help explain the seemingly pointless suffering in even the worst of these Everett worlds, if we suppose that everyone in every branch, shows up in an afterlife.
A theist may also take comfort in the fact that the many-worlds interpretation is still far from scientific orthodoxy. Although beloved by Oxford philosophers and accepted by a growing number of theoretical physicists, the theory remains highly controversial, and there are fundamental problems still being hashed out by the experts.
The Everettian multiverse contains worlds that are hard to reconcile with a good God, but Tegmark’s multiverse might contain the worst. His theory, from his 2014 book Our Mathematical Universe, isn’t anchored in quantum mechanics but in modal realism, the doctrine proposed by philosopher David Lewis that every possible way that things could have gone—every consistent, total history of a universe—is as real as our own universe.
Most philosophers talk about possible worlds as abstract things, like numbers, located outside of space and time, and as if they are very different from the actual world, which is substantial and made out of good old-fashioned matter. Tegmark agrees that other merely possible universes are abstract like numbers. But he denies that this makes them less real than the physical world. He thinks our universe is itself fundamentally a mathematical structure. Every physicist agrees that there is a set of mathematical entities standing in relations that perfectly models the distribution of fields and particles which a perfect physics would ascribe to our world. But Tegmark argues that our universe is identical to those mathematical things.
If the world we inhabit is a purely mathematical structure, then all the other possible worlds we can imagine are equally real, their existence a necessary result of slightly different mathematical structures. For every possible way in which mathematical models dictate that matter can be consistently arranged to fill a spacetime universe, there exists such a universe.
These possible arrangements of matter are bound to include ones corresponding to miserable universes full of pointless suffering—universes like all of the worst branches in the Everettian world-tree, and infinitely many more just as bad. But there would also be worlds that are worse. Unlike Everett’s worlds that are generated by a physical theory, Tegmark’s worlds are generated by mere possibility, which he identifies with mathematical consistency.
Budding Universes: Everett’s many worlds interpretation holds that there are multiple overlapping universes, all branching off from some initial state in a great world-tree. Jacopo Werther
According to Tegmark, every possible story about living creatures that can be told by means of a mathematical model of the underlying physical facts is a true story. This means that even if some of Tegmark’s universes last long enough to include episodes in which their inhabitants have an afterlife, the existence of mathematical structures with every possible shape and size requires shorter worlds, too. And, infinitely many of these worlds will not last long enough for their inhabitants to enjoy an afterlife.
There is one way, then, in which Everett’s multiverse poses less of a challenge to the theist than Tegmark’s. Everett’s theory doesn’t predict that God won’t do anything for people with short, miserable lives, and it doesn’t predict that God won’t somehow compensate them in an afterlife. Rather, it only predicts that there will be many more short, miserable lives than just the ones in our universe; whereas Tegmark’s theory implies that there have to be worlds in which there are short miserable lives and no afterlife.
Adding insult to injury, since the horrifying worlds are consequences of pure mathematics, they exist as a matter of absolute necessity—so there is nothing God can do about it! The resulting picture will remain offensive to pious ears: A God who loved all creatures, but was forced to watch infinitely many of them endure lives of inconsolable suffering, would be a God embroiled in a tragedy.
But there is still hope for the theist.
Unlike the Everettian many worlds, which issue from experimental theories in physics and so are harder to dismiss, Tegmark’s theory is based on frail philosophical arguments. Take, for example, his claim that the physical universe is a purely mathematical structure: Why should we accept this? Ordinarily, physicists use mathematical structures as models for how the physical world might work, but they do not identify the mathematical model with the world itself. Tegmark’s reason for taking the latter approach is his conviction that physics must be purged of anything but mathematical terms. Non-mathematical concepts, he says, are “anthropocentric baggage,” and must be eliminated for objectivity’s sake. But why think that the only objective descriptions that can truly apply to things as they are in themselves are mathematical descriptions? So far as I can see, he never justifies this assumption. And such a counterintuitive starting point isn’t enough to threaten one’s belief in a benevolent God.
Apart from the threats posed by the awful worlds within the multiverses of Everett and Tegmark, the idea that we inhabit a multiverse doesn’t have to undermine a belief in God. Every theist should take seriously the possibility that there might exist more universes, simply on the grounds that God would have reason to create more good stuff. Indeed, an infinitely ingenious, resourceful, and creative Being might be expected to work on canvases the size of worlds—some filled with frenetic activity, others more like vast minimalist paintings, many maybe even featuring intelligent beings like ourselves. And the theories of physicists such as Alan Guth and Andrei Linde—whose multiverse is an eternally inflating field that spins off baby universes—or Paul Steinhardt and Neil Turok—whose multiverse amounts to an endless cyclical universe punctuated by big bangs and big crunches—are arguably compatible with this theological vision.
It may turn out that our world is fairly middling, one among the many universes that were good enough for God to create. And the idea of a multiverse consisting of disconnected spacetime universes may make it easier to believe that our world—our universe—is a part of a larger one that is on balance very good and created by a perfectly benevolent deity.
Dean Zimmerman is a professor of philosophy at Rutgers University. Follow him on Twitter @deanwallyz.
Rutgers smaller
See the full article here .
Please help promote STEM in your local schools.
STEM Icon
Stem Education Coalition
Compose new post
Next post/Next comment
Previous post/Previous comment
Show/Hide comments
Go to top
Go to login
Show/Hide help
shift + esc
%d bloggers like this: |
f45c4b9d75bb8f96 | Random Matrices, Fall 2017
Introductory course on random matrices from October 10 to November 23, 2017 at IST Austria. The course instructor is László Erdős and the teaching assistant is Dominik Schröder.
Random matrices were first introduced in statistics in the 1920’s, but they were made famous by Eugene Wigner’s revolutionary vision. He predicted that spectral lines of heavy nuclei can be modelled by the eigenvalues of random symmetric matrices with independent entries (Wigner matrices). In particular, he conjectured that the statistics of energy gaps is given by a universal distribution that is independent of the detailed physical parameters. While the proof of this conjecture for realistic physical models is still beyond reach, it has recently been shown that the gap statistics of Wigner matrices is independent of the distribution of the matrix elements. Students will be introduced to the fascinating world of random matrices and presented with some of the basic tools for their mathematical analysis in this course.
Target audience
Students with orientation in mathematics, theoretical physics, statistics and computer science. No physics background is necessary. Calculus, linear algebra and some basic familiarity with probability theory is expected.
The final grade will be obtained as a combination of the student’s performance on the example sheets and an oral exam.
Lecture notes
Related notes from the recent PCMI summer school on random matrices.
The course lasts from October 10 – November 23, 2017.
Day Time Room
Oct 12 Thu 11.20am–12.35pm Lecture Mondi 3
Oct 17 Tue 10.15am-11.30am Lecture Mondi 3
Oct 17 Tue 11.45am-12.35pm Recitation Mondi 3
Oct 18 Wed 11.30am-12.45pm Lecture Mondi 1
Oct 24 Tue 10.15am-11.30am Lecture Mondi 3
Oct 24 Tue 11.45am-12.35pm Recitation Mondi 3
Oct 25 Wed 11.30am-12.45pm Lecture Mondi 1
Nov 2 Thu 11.20am–12.35pm Lecture Mondi 3
Nov 7 Tue 10.15am-11.30am Lecture Mondi 3
Nov 7 Tue 11.45am-12.35pm Recitation Mondi 3
Nov 9 Thu 11.20am–12.35pm Lecture Mondi 3
Nov 14 Tue 10.15am-11.30am Lecture Mondi 3
Nov 14 Tue 11.45am-12.35pm Recitation Mondi 3
Nov 16 Thu 11.20am–12.35pm Lecture Mondi 3
Nov 21 Tue 10.15am-11.30am Lecture Mondi 3
Nov 21 Tue 11.45am-12.35pm Recitation Mondi 3
Nov 23 Thu 11.20am–12.35pm Lecture Mondi 3
October 12
1. Basic facts from probability theory. Law of large numbers (LLN) and the central limit theorem (CLT), viewed as universality statements. In the LLN the limit is deterministic, while in CLT the limit is a random variable, namely the Gaussian (normal) one. No matter which distribution the initial random variables had, their appropriately normalized sums always converge to the same distribution — in other words the limiting Gaussian distribution is universal.
2. Wigner random matrices. Real symmetric and complex hermitian. GUE and GOE. Wishart matrices and their relation to Wigner-type matrices. Scaling so that eigenvalues remain bounded. Statement on the concentration of the largest eigenvalue. Introducing the semicircle law as a law of large numbers for the empirical density of the eigenvalues.
3. Linear statistics of eigenvalues (with a smooth function as observable) leads to CLT but with an unusual scaling — indicating very strong correlation among eigenvalues.
4. Statement of the gap universality, Wigner surmise. The limit behavior of the gap is a new universal distribution; in this sense this is the analogue of the CLT.
Reading. PCMI lecture notes up to middle of Section 1.2.3.
October 17
1. Main questions in random matrix theory:
• Density on global scale (like LLN)
• Extreme eigenvalues (especially relevant for sample covariance matrices)
• Fluctuation on the scale of eigenvalue spacing (like CLT)
• Mesoscopic density — follows the global behaviour, but it is a non-trivial fact.
• Eigenfunction (de)localization
2. Definition of $k$-point correlation functions. Relation of the gap distribution to the local correlation functions on scale of the eigenvalue spacing (inclusion-exclusion formula)
3. Rescaled (local) correlation functions. Determinant structure. Sine kernel for complex Hermitian Wigner matrices. Statement of the main universality result in the bulk spectrum (for energies away from the edges of the semicircle law).
Reading. PCMI lecture notes up to the end of Section 1.2.3.
October 17 (Recitation)
1. Definition of Stieltjes transform \begin{equation}m_\mu(z):=\int_{\mathbb R} \frac{d\mu(\lambda)}{\lambda-z},\quad z\in\mathbb{H}:=\{z\in\mathbb C,\, \Im z>0\}\notag\end{equation} of probability measure $\mu$ and statement of elementary properties (analyticity, trivial bounds on derivatives). Interpretation of the Stieltjes transform of the empirical spectral density as the normalized trace of the resolvent.
2. Interpretation of imaginary part as the convolution with the Poisson kernel, \begin{equation}\Im m_\mu(x+i\eta)= \pi (P_\eta\ast \mu)(x),\quad P_\eta(x):=\frac{1}{\pi}\frac{\eta}{x^2+\eta^2}.\notag\end{equation} The Stieltjes transform $m_\mu(x+i\eta)$ thus contains information about $\mu$ at a scale of $\eta$ around $x$.
3. Stieltjes continuity theorem for sequences of random measures: A sequence of random probability measures $\mu_1,\mu_2,\dots$ converges vaguely, a) in expectation b) in probability c) almost surely to a deterministic probability measure $\mu$ if and only if for all $z\in\mathbb H$, the sequence of numbers $m_{\mu_N}(z)$ converges a) in expectation b) in probability c) almost surely to $m_{\mu}(z)$.
4. Derivation of the Helffer-Sjöstrand formula \begin{equation}f(\lambda)=\frac{1}{2\pi i}\int_{\mathbb C} \frac{\partial_{\overline z} f_{\mathbb C} (z)}{\lambda-z}d \overline z \wedge d z,\quad f_{\mathbb C}(x+i\eta):= \chi(\eta)\big[f(x)+i\eta f’(x)\big] \notag\end{equation} for compactly supported $C^2$-functions $f\colon\mathbb R\to\mathbb R$ and some smooth cut-off function $\chi$.
October 18
Main motivations for random matrices:
1. Wigner’s original motivation: to model energy levels of heavy nuclei. The distribution of the gaps very well matched that of the Wigner random matrices. The density of states depends on the actual nucleus (and it is not the semicircle), but the local statistics (e.g. gap statistics) are universal.
2. Random Schrodinger operators, Anderson transition
3. Gap statistics of the zeros of the Riemann zeta function.
Quantum Mechanics in nutshell:
• Configuration space: $S$ (with a measure)
• State space: $\ell^2(S)$ (square integrable functions on $S$)
• Observables: self-adjoint (symmetric) operators on $\ell^2(S)$
• A distinguished observable: the Hamilton (or energy) operator
• Time evolution — Schrödinger equation.
Random Schrödinger operator describes a single electron in an ionic (metallic) lattice. $S = \mathbb Z^d$ or a subset of that. $H$ is the sum of the discrete (lattice) Laplace operator and a random potential.
Anderson phase transition: depending on the strength of the disorder, the system is either in delocalized (conductor) or localized (insulator) phase. Localized phase is characterized by
• Localized eigenfunctions
• Localized time evolution (no transport)
• Pure point spectrum (for the infinite volume operator)
• Poisson local spectral statistics, no level repulsion (for the finite volume model)
In the delocalized phase, we have delocalized eigenfunctions (“almost” $\ell^2$-normalizable solutions to the eigenvalue equation), quantum transport, absolutely continuous spectrum and random matrix eigenvalue statistics, in particular level repulsion.
Reading. PCMI lecture notes Sections 5.1 — 5.3
October 24
Phase diagram for the Anderson model (= random Schrödinger operator on the $\mathbb Z^d$ lattice) in $d\ge 3$ dimensions. Localized regime can be proven, delocalized regime is conjectured to exist but no mathematical result.
In $d=1$ dimension the Anderson model is always localized (transfer matrix method helps). In $d=2$ nothing is known, even there is no clear agreement in the physics whether it behaves more like $d=1$ (localization) or more like $d=3$ (delocalization); majority believes in localization.
Delocalized regime, at least for small disorder, sounds easier to prove because it looks like a perturbative problem (zero disorder corresponds to the pure Laplacian which is perfectly understood). Resolvent perturbation formulas were discussed; major problem: lack of convergence.
We gave some explanation why the localization regime is easier to handle mathematically: off-diagonal resolvent matrix elements decay exponentially. This fact provides an effective decoupling and makes localized resolvents almost independent.
Random band matrices: naturally interpolate between $d=1$ dimensional random Schrödinger operators (bandwidth $W=O(1)$) and mean field Wigner matrices (bandwidth $W = N$, where $N$ is the linear size of the system). Phase transition is expected at $W = \sqrt{N}$; this is a major open question. There are similar conjectures in higher dimensional band matrices, but we did not discuss them.
Finally, we discussed a mysterious connection between the Dyson sine kernel statistics and the location of the zeros of of the zeta function on the critical line. There is only one mathematical result in this direction, Montgomery proved that the two point function of the (appropriately rescaled) zeros follows the sine kernel behavior, but only for test functions with Fourier support in $[-1,1]$. No progress has been made in the last 40 years to relax this condition.
Reading. PCMI lecture notes Section 5.3 and the entertaining article “Tea Time in Princeton” by Paul Bourgade about Montgomery’s theorem.
October 24 (Recitation)
1. Analytic definition of (multivariate) cumulants $\kappa_\alpha$ of a random vector $X=(X_1,\dots,X_n)$ as the coefficients of the log-characteristic function \begin{equation}\log \mathbf E e^{i t\cdot X} = \sum_\alpha \kappa_\alpha \frac{(it)^\alpha}{\alpha!}.\notag\end{equation}
2. Proof of the cumulant expansion formula \begin{equation}\mathbf E X_i f(X)=\sum_{\alpha} \frac{\kappa_{\alpha, i }}{\alpha!}\mathbf E f^{(\alpha)}(X)\notag\end{equation} via Fourier transform.
3. Expression of moments in terms of cumulants as the sum of all partitions \begin{equation}\mathbf{E} X_1\dots X_n=\sum_{\mathcal{P}\vdash [n]} \kappa^{\mathcal{P}}=\sum_{\mathcal{P}\vdash [n]}\prod_{P_i\in\mathcal{P}} \kappa( X_j \mid j\in P_i )\notag.\end{equation}
4. Derivation of the inverse relationship \begin{equation}\label{comb:cum}\kappa(X_1,\dots,X_n)=\sum_{\mathcal P\vdash [n]}(-1)^{\lvert\mathcal P\rvert-1}(\lvert\mathcal P\rvert-1)! \prod_{P_i\in\mathcal P} \mathbf E \prod_{j\in P_i} X_j\end{equation} through Möbius inversion on abstract incidence algebras. Note that \eqref{comb:cum} can also serve as a purely combinatorial definition of cumulants.
5. Proof that cumulants of random variables which split into two independent subgroups vanish.
October 25
There are two natural ways to put a measure on the space of (hermitian) matrices, hence defining two major classes of random matrix ensembles:
1. Choose matrix elements independently (modulo the hermitian symmetry) from some distribution on the complex or real numbers. This results in Wigner matrices (and possible generalizations, when identicality of the distribution is dropped).
2. Equip the space of hermitian matrices with the usual Lebesgue measure and multiply it by a Radon-Nikodym factor that makes the measure finite. We choose the factor invariant under unitary conjugation in the form $\exp(-\text{Tr}\, V(H))$ for some real valued function $V$. These are called invariant ensembles.
Only Gaussian matrices belong to both families.
For invariant ensembles, the joint probability density function of the eigenvalues can be computed explicitly and it consists of the Vandermonde determinant (to the first or second power, $\beta=1,2$, depending on the symmetry class). We sketched of the proof by change of variables.
Invariant ensembles can also be represented as Gibbs measure of N points on the real line with a one-body potential $V$ and a logarithmic two-body interaction. This interpretation allows for choosing any $\beta>0$, yielding the beta-ensembles, even though there is no matrix or eigenvalues behind them. There are analogous universality statements for beta-ensembles, which assert that the local statistics depend only on the parameter beta and are independent of the potential $V$.
Reading. PCMI lecture notes Section 1.1.2
November 2
Precise statement of the Wigner semicircle law (for i.i.d. case) in the form of weak convergence in probability. In general, there are two methods to prove the semicircle law:
1. Moment method: computes $\text{Tr}\, H^k$, obtains the distribution of the moments of the eigenvalues. The moments are given by the Catalan numbers and they uniquely identify the semicircle law (calculus exercise) using Carleman theorem on the uniqueness of the measure if the moments do not grow too fast.
2. Resolvent method: derives an equation for the limiting Stieltjes transform of the empirical density.
The resolvent method in general is more powerful, it works well inside as well as neat the edge of the spectrum. The moment method is powerful only at the extreme edges.
Proof of the Wigner semicircle law by moment method: Compute \begin{equation}\frac{1}{N} \mathbb E \text{Tr}\, H^k=\frac{1}{N}\mathbb E\sum_{i_1,\dots,i_k} h_{i_1i_2}h_{i_2i_3}\dots h_{i_{k-1}i_k}h_{i_ki_1}\notag\end{equation} in terms of the number of backtracking paths (only those path give a relevant contribution where every edge is travelled exactly twice and the skeleton of the graph is a tree). We reduced the problem to counting such path — it is an $N$ independent problem.
November 7
We completed the proof of the Wigner semicircle law by moment method. Last time we showed that to evaluate $\mathbb E \text{Tr}\, H^{2k}$ is sufficient to count the number of backtracking path of total length $2k$. This number has many other combinatorial interpretations. It is the same as the number of rooted, oriented trees on $k+1$ vertices by a simple one to one correspondance. It is also the same as the number of Dyck paths of length $2k$, where a Dyck path is a random walk on the nonnegative integers starting and ending at $0$. Finally, we counted the Dyck paths by deriving the recursion \begin{equation}C_k = C_{k-1} C_0 + C_{k-2} C_1 + … + C_0 C_{k-1}\notag\end{equation} with $C_0=1$ for their number $C_k$. This recursion can be solved by considering the generating function \begin{equation}f(x) = \sum_{k=0}^\infty C_k x^k\notag\end{equation} and observe that \begin{equation}xf^2(x) = f(x) - 1.\notag\end{equation} Thus $f(x)$ can be explicitly computed by the solution formula for the quadratic equation and Taylor expanding around $x=0$. After some calculation with the fractional binomial coefficients, we obtain that $C_k = 1/(k+1) {2k \choose k}$, i.e. the Catalan numbers.
Since the Catalan numbers are the moments of the semicircle law (calculus exercise), and these moments do not grow too fast, they identify the measure.
This proved that the expectation of the empirical eigenvalue density converges to the semicircle in the sense of moments. Using compact support of the measures (for the empirical density we know it from the homework problem since the norm of $H$ is bounded), by Weierstrass theorem we can extend the convergence for any bounded continuous functions.
Finally, the expectation can be removed, by computing the variance of $N^{-1} \text{Tr}\, H^k$, again by the graphical representation (now we have two cycles and studied which edge-coincidences give rise to nonzero contribution). We showed that the variance vanishes in the large $N$ limit and then a Chebyshev inequality converts it into a high probability bound.
Reading. PCMI lecture notes Section 2.3
November 7 (Recitation)
We found yet another combinatorial description of Catalan numbers. $C_k$ is the number of non-crossing pair partitions of the set ${1,\dots,2k}$. Indeed, denote the number in question by $N_k$. Then there exists some $j$ such that $1$ is paired with $2j$ since due to the absence of crossings there has to be an even number of other integers between $1$ and its partner. The number of non-crossing pairings of the integers ${2,\dots,2j-1}$ and ${2j+1,\dots,2k}$ are given by $N_{j-1}$ and $N_{k-j}$ respectively and it follows that \begin{equation}N_{k}=\sum_{j=1}^k N_{j-1}N_{k-j}, \qquad N_1=1\notag\end{equation} and thus $N_k=C_k$ since they satisfy the same recursion.
We defined a commonly used notion of stochastic domination $X\prec Y$ and stated the following large deviation estimates for families of random variables $X_i,Y_i$ of zero mean $\mathbf E X_i=\mathbf E Y_i=0$ and unit variance $\mathbf E \lvert X_i\rvert^2=\mathbf E \lvert Y_i\rvert^2=1$ and deterministic coefficients $b_i$, $a_{ij}$, \begin{equation}\left\lvert\sum_{i} b_i X_i\right\rvert\prec \left(\sum_i\lvert b_i\rvert^2\right)^{1/2}\notag\end{equation} \begin{equation}\left\lvert\sum_{i,j} a_{ij} X_i Y_j\right\rvert\prec \left(\sum_{i,j}\lvert a_{ij}\rvert^2\right)^{1/2}\label{LDE}\end{equation} \begin{equation}\left\lvert\sum_{i\not=j} a_{ij} X_i X_j\right\rvert\prec \left(\sum_{i\not=j}\lvert a_{ij}\rvert^2\right)^{1/2}\notag\end{equation} We proved \eqref{LDE} only for uniformly subgaussian families of random variables but not that uniformly finite moments of all orders are also sufficient for them to hold.
November 9
• Precise statement of the local semicircle laws (entrywise, isotropic, averaged) for Wigner type matrices with moment condition of arbitrary order.
• Definition of stochastic dominations, some properties.
• We started the proof of the weak law for Wigner matrices.
• Schur complement formula. Almost selfconsistent equation for $m_N = N^{-1} \text{Tr}\, G$ assuming that the fluctuation of the quadratic term is small (will be proven later).
• The other two errors were shown to be small. The smallness of the single diagonal element $h_{ii}$ directly follows from the moment condition. The difference of the Stieltjes transform of the resolvent and its minor was estimated via interlacing and integration by parts.
Reading. PCMI lecture notes Section 3.1.1.
November 14
• Proof of the weak local law in the bulk.
• Stability of the equation for $m_{sc}$, the Stieltjes transform of the semicircle law.
• Proof for the large eta regime.
• Breaking the circularity of the argument in two steps: In the first step one proves a weaker bound that allows one to approximate $m$ via $m_{sc}$, then run the argument again but with improved inputs. The bootstrap argument will have the same philosophy next time.
• Discussion of the uniformity in the spectral parameter. Grid argument to improve the bound for supremum over all $z$. This argument works because (i) the probabilistic bound for any fixed $z$ is very strong (arbitrary $1/N$-power) and (ii) the function we consider $(m-m_{sc})(z)$ has some weak but deterministic Lipschitz continuity.
November 14 (Recitation)
We presented a cumulant approach to proving local laws for correlated random matrices. Specifically, we gave a heuristic argument that the resolvent $G$ should be well apprixmated by the unique solution $M=M(z)$ to the matrix Dyson equation (MDE) \begin{equation}0=1+zM+\mathcal S[M]M, \quad \Im M>0,\qquad \mathcal S[R]:= \sum_{\alpha,\beta}\text{Cov}(h_\alpha,h_\beta) \Delta^\alpha R\Delta^\beta.\notag\end{equation} We furthermore proved that the error matrix \begin{equation}D=1+zG+\mathcal S[G]G=HG+\mathcal S[G]G\notag\end{equation} satisfies \begin{equation}\mathbf E\lvert\langle x,Dy \rangle\rvert^2 \lesssim \left(\frac{\lVert x\rVert \lVert y\rVert}{\sqrt{N\eta}}\right)^2,\qquad \mathbf E\lvert\langle BD \rangle\rvert^2 \lesssim \left(\frac{\lVert B\rVert}{N\eta}\right)^2\notag\end{equation} in the case of Gaussian entries $h_\alpha$.
November 16
• We completed the rigorous proof of the weak local semicircle law by the bootstrap argument.
• Then we mentioned two improvements: (i) Strong local law (error bound improved from $(N \eta)^{-1/2}$ to $(N\eta)^{-1}$ and (ii) entrywise local law.
• Proof of the entrywise local law via the self-consistent vector equation. Stability operator mentioned in the more general setup of Wigner type matrices (when the variance matrix $S$ is stochastic). Diagonal and offdiagonal elements are estimated separately via a joint control parameter $\Lambda$. Main ideas are sketched, the rigorous bootstrap argument was omitted.
Reading. PCMI lecture notes Sections 4.1–4.3
November 21
• Fluctuation averaging phenomenon. Proof of the strong local law in the bulk. Some remarks on the modifications at the edge. Corollaries of the strong local law: optimal estimates on the eigenvalue counting function and rigidity (location of the individual eigenvalues).
• Bulk universality for Hermitian Wigner matrices. Basic idea: interpolation. Ornstein Uhlenbeck process for matrices (preserves expectation and variance). Crash course on Brownian motion, stochastic integration and Ito’s formula. Dyson Brownian motion (DBM) for the eigenvalues. Local equilibration phenomenon due to the strong level repulsion in the DBM.
November 23
We summarized the three step strategy to prove local spectral universality of Wigner matrices. We discussed the second step: fast convergence to equilibrium of the Dyson Brownian Motion. Relation between SDE and PDE: introduction of the generator. Laplacian is the generator of the standard Brownian motion.
Basics of large dimensional analysis: Gibbs measure, entropy, Dirichlet form, generator. The total mass of a probability measure is preserved under the dynamics. Relation between various concepts of closeness to equilibrium. Entropy inequality (total variation norm is bounded by the entropy). Logarithmic Sobolev inequality. Spectral gap inequality. Bakry-Emery theory: (i) the Gibbs measure with a convex Hamiltonian satisfies LSI, (ii) entropy and Dirichlet form decays exponentially fast.
The problem sheets can either be handed in during the lecture or put in the letter box of Dominik Schröder in LBW, 3rd floor.
published due
Problem sheet I Solutions Oct 18 Oct 25
Problem sheet II Solutions Oct 25 Nov 7
Problem sheet III Solutions Nov 9 Nov 21 |
2fcf90ff9971020a | Edit: This question is similar and possibly presents the question in a more approachable way, and this answer has given me a more real-space way of considering the movement of electrons.
I'm looking for an intuitive way to think about how electrons in a conduction band (or in an unfilled valence band) make a material a conductor, whereas filled bands result in an insulator.
What I think I understand:
A metal is a lattice of atoms with a 'sea' of delocalised electrons. If we consider these electrons as particles, an electric current can be thought of as the drift of these electrons in a particular direction when a potential difference is applied.
Energy bands arise due to the mixing of atoms' orbitals into molecular orbitals. The periodicity of the lattice results in band gaps forming. That is, certain electron energies aren't allowed.
The highest occupied electron energy band is the valence band. If this band is full (all $k$ states are full), the material is an insulator as an applied electric field has no effect on the states of the electrons (bottom row on figure) - there's nowhere for electrons to go as all states are taken. However, if the valence band is partially filled, only a small amount of energy is required to shift some electrons into higher energy states (top row on figure). Thus the material is a conductor.
What I'm confused about:
What's the link between electrons being able to populate new $k$ states, and the macroscopic property of conductivity, or a flowing current? After some reading of similar questions, I've come to the rough idea that if an electron can easily access empty $k$ states, it can "hop" around the crystal lattice relatively easily as it has empty spots to hop into. So different $k$ states correspond to different spatial locations? This is obviously treating the electron as a particle rather than a wave.
Is it more "right" in this situation to consider electrons as waves, thus filled bands result in standing waves in the electronic wavefunctions, and travelling waves for conductors? (I'm very unclear on this take of things.)
Pic http://users-phys.au.dk/philip/pictures/solid_metalquantum/blochconduction.gif
• $\begingroup$ Maybe idea of a Fermi sphere would be of some help? $\endgroup$ – Žarko Tomičić Jan 19 '17 at 13:01
• $\begingroup$ Electric field puts electrons in new states if there are any states to populate. $\endgroup$ – Žarko Tomičić Jan 19 '17 at 13:02
• $\begingroup$ In any case you have to use quantum mechanics do describe conductivity. New states mean different kinetic energy for electrons. Means different momentum. Etc. So if you put electrons in new states Fermi sphere shifts in some definite direction. Difference betwen fixed Fermi sphere and one that is shifted gives conductivity. $\endgroup$ – Žarko Tomičić Jan 19 '17 at 13:10
• $\begingroup$ And, different k states correspond to different kinetic energies. $\endgroup$ – Žarko Tomičić Jan 19 '17 at 13:10
• $\begingroup$ Maybe the question is badly phrased. Why does a shift in the Fermi sphere result in conductivity? Is there a physical way to think about it? $\endgroup$ – nancy Jan 19 '17 at 14:02
The first point to note is that in a periodic solid, you are essentially representing a solid with an infinite number of atoms, so you have a continuum of states, i.e. bands. The second point is that in a solid, the bands will be filled up to the top of the conduction band, each containing two electrons. In an electrical insulator, the electrons can be considered to be confined to their states, unless you give them enough energy (e.g. via photons or phonons) to cross the band gap from the valence band to the conduction band. In a metal, however, there is no band gap, so electrons can occupy conduction states without giving them a kick; instead, they occupy states up to the Fermi energy or Fermi level.
Fermi energy
In a metal, the electrons can no longer considered to be localised. The $k$ in the $k$-states refers to a wave vector, a solution of the Schrödinger equation. In an insulator, these wave vectors are localised, akin to a Gaussian function, but in a metal, they are more plane wave-like and are delocalised. The fact that these states are wave-like, and therefore delocalised across the notionally infinite solid is what allows the flow of current.
• 1
$\begingroup$ But from what I've read and understood, the wavefunction of electrons in (periodic) insulators is still plane wave-like and delocalised across the entire solid, it's just that these states are stationary so there's no transfer of energy. $\endgroup$ – nancy Jan 19 '17 at 13:37
• $\begingroup$ You're right, you can mathematically expand the wavefunction of an electron in an insulator in terms of plane waves, but they can still be localised --- there will be an energy penalty involved if you try to move it to a delocalised state in the conduction band. By the same token, you can expand a delocalised wavefunction in terms of localised functions such as Gaussians. In a material without a band gap, there is no such penalty. $\endgroup$ – Paraquat Jan 19 '17 at 13:50
• 1
$\begingroup$ Thanks for the clarification! But I don't understand the difference between electrons in the valence band and the conduction band. Is saying the wavefunction is delocalised correspond to a classical particle that can "move" through the materal? I feel I'm missing something obvious or I'm thinking about this in completely the wrong way. $\endgroup$ – nancy Jan 19 '17 at 14:01
• $\begingroup$ I think you may be confusing real space and reciprocal ($k$-) space. The natural way of solving the Schrodinger equation is in reciprocal space with wavefunctions expanded as plane waves, which gives you a $k$-vector that encodes the frequency and polarisation of the wave-like electron. If you were to transform your k-space solution to real space, you would see an electron that is delocalised across the entire solid, but unfortunately this tends to be difficult to visualise. $\endgroup$ – Paraquat Jan 19 '17 at 14:12
• 1
$\begingroup$ If I may rephrase my question, how can one intuitively picture what an electrical current is, in terms of the delocalised electron wavefunction? Or alternatively, what's the distinction between an electron in the valence band and one in the conduction band? (Why does an electron in the conduction band mean the material is now a conductor?) $\endgroup$ – nancy Jan 19 '17 at 14:22
Your Answer
|
80bae99e56eadb2d |
Share this |
Latest Posts
Currently, NASA is asking for public assistance for their astrobiology program, or they were up until the current government shutdown, and in particular, asking for suggestions as to where their program should be going. I think this is an extremely enlightened view, and I hope they receive plenty of good suggestions and take some of them up. This is a little different from the average way science gets funded, in which academic scientists put in applications for funds to pursue what they think is original. This is supposed to permit the uncovering of "great new advances", and in some areas, perhaps it does, but I rather suspect the most common outcome is to support what Rutherford dismissively called, "stamp collecting". You get a lot of publications, a lot of data, but there is no coherent approach towards answering "big questions". That, I think, is a strength of the NASA approach, and I hope other organizations take this up. For example, if we wish to address climate change, what questions do we really want to have answered? What we tend to get is, "Fund me to set up more data gathering," from those too uninspired to come up with something more incisive. We do not need more data to set the parameters so that current models better represent what we see; we need better models that will represent what will happen if we do or do not do X.
So what are the good questions for NASA to address? Obviously there are a very large number of them, but in my view, regarding biogenesis, I think there are some very important ones. Perhaps one of the most important one that has been pursued so far is how do the planets get their water, because if we want life on other planets, they have to have water. The water on the rocky planets is often thought to come from chondrites, as a "late veneer" on the planet. Now, one of the peculiarities of this explanation is that, as I argued in my ebook, Planetary Formation and Biogenesis, this explanation has serious problems. The first is, only a special class of chondrites contains volatiles; the bulk of the bodies from the asteroid belt do not. Further, the isotopes of the heavier elements are different from Earth, the ratios of different volatiles do not correspond to anything we see here or on the other planets, so why is such an explanation persisted with? The short answer is, for most there is no alternative.
My alternative is simple: the planets started accreting through chemical processes. Only solids could be accreted in reasonable amounts this close to the star, unless the body got big enough to hold gravitationally gases from the accretion disk. Water can be held as metal and silicon hydroxyl compound, the water subsequently being liberated. This, as far as I know, is the only mechanism by which the various planets can have different atmospheric compositions: different amounts of the various components were formed at different temperatures in the disk.
If that is correct, we would have a means of predicting whether alien planets could conceivably contain life. Accordingly, one way to pursue this would be to try to understand the high temperature chemistry of the dusts and volatiles expected to be in the accretion disk. That would involve a lot of work for which chemists alone would be suitable. Now, my question is, how many chemists have shown any interest in this NASA program? Do we always want to complain about insufficient research funds, or are we prepared to go out and do something to collect more?
Posted by Ian Miller on Oct 7, 2013 1:10 AM BST
Perhaps one of the more interesting questions is where did Earth's volatiles come from? The generally accepted theory is that Earth formed by the catastrophic collisions of planetary embryos (Mars-sized bodies), which effectively turned earth into a giant ball of magma, at which time the iron settled to the core though having a greater density, and took various siderophile elements with it. At this stage, the Earth would have been reasonably anhydrous. Subsequently, Earth got bombarded with chondritic material from the asteroid belt that was dislodged by Jupiter's gravitational field (including, in some models, Jupiter migrating inwards then out again), and it is from here that Earth gets its volatiles and its siderophile elements. This bombardment is often called "the late veneer". In my opinion, there are several reasons why this did not happen, which is where these papers become relevant. What are the reasons? First, while there was obviously a bombardment, to get the volatiles through that, only carbonaceous chondrites will suffice, and if there were sufficient mass to give that to Earth, there should also be a huge mass of silicates from the more normal bodies. There is also the problem of atmospheric composition. While Mars is the closest, it is hit relatively infrequently compared with its cross-section, and hit by moderately wet bodies almost totally deficient in nitrogen. Earth is hit by a large number of bodies with everything, but the Moon is seemingly not hit by wet bodies or carbonaceous bodies. Venus, meanwhile, is hit by more bodies that are very rich in nitrogen, but relatively dry. What does the sorting?
The first paper (Nature 501: 208 – 210) notes that if we assume the standard model by which core segregation took place, the iron would have removed about 97% of the Earth's sulphur and transferred it to the core. If so, the Earth's mantle should exhibit fractionated 34S/32S ratio according to the relevant metal-silicate partition coefficients, together with fractionated siderophile metal abundances. However, it is usually thought that Earth's mantle is both homogeneous and chondritic for this sulphur ratio, consistent with the acquisition of sulphur ( and other siderophile elements) from chondrites (the late veneer). An analysis of mantle material from mid-ocean ridge basalts displayed heterogeneous 34S/32S ratios that are compatible with binary mixing between a low 34S/32S ambient mantle ratio and a high 34S/32S recycled component. The depleted end-member cannot reach a chondritic value, even if the most optimistic surface sulphur is added. Accordingly, these results imply that the mantle sulphur is at least partially determined by original accretion, and not all sulphur was deposited by the late veneer.
In the second (Geochim. Cosmochim. Acta 121: 67-83), samples from Earth, Moon, Mars, eucrites, carbonaceous chondrites and ordinary chondrites show variation in Si isotopes. Earth and Moon show the heaviest isotopes, and have the same composition, while enstatite chondrites have the lightest. A model of Si partitioning based on continuous planetary formation that takes into account T, P and oxygen fugacity variation during Earth's accretion. If the isotopic difference results solely from Si fractionation during core formation, their model requires at least ~12% by weight Si in the core, which exceeds estimates based on core density or geochemical mass balance calculations. This suggests one of two explanations: Earth's material started with heavier silicon, or (2) there is a further unknown process that leads to fractionation. They suggest vaporization following the Moon forming event, but would not this lead to lighter or different Moon material?
One paper (Earth Planet. Sci. Lett. 2013: 88-97) pleased me. My interpretation of the data related to atmospheric formation is that the gaseous elements originally accreted as solids, and were liberated by water as the planet evolved. These authors showed that early early degassing of H2 obtained from reactions of water explains the "high oxygen fugacity" of the Earth's mantle. A loss of only 1/3 of an "ocean" of water from Earth would shift the oxidation state of the upper mantle from the very low oxidation state equivalent to the Moon, and if so, no further processes are required. Hydrogen is an important component of basalts at high pressure and, perforce, low oxygen fugacity. Of particular interest, this process may have been rapid. On the early Earth, over 5 times the amount of heat had to be lost as is lost now, and one proposal (501:501 - 504 ) heat pipe volcanism such as found on Io would manage this, in which case, the evolution of water and volatiles may have also been very rapid.
Finally, in (Icarus 226: 1489 -1498), near-infrared spectra show the presence of hydrated poorly crystalline silica with a high silica content on the western rim of Hellas. The surfaces are sporadically exposed over a 650 km section within a limited elevation range. The high abundances and lack of associated aqueous phase material indicate high water to rock ratios were present, but the higher temperatures that would lead to quartz were not present. This latter point is of interest because it is often considered that the water flows on Mars in craters were due to internal heating due to impact, such heat being retained for considerable periods of time. To weather basalt to make silica, there would have to be continuous water of a long time, and if the water was hot and on the surface it would rapidly evaporate, while if it was buried, it would stay super-heated, and presumably some quartz would result. This suggests extensive flows of cold water.
Posted by Ian Miller on Sep 30, 2013 3:30 AM BST
In a previous post, I questioned whether gold showed relativistic effects in its valence electrons. I also mentioned a paper of mine that proposes that the wave functions of the heavier elements do not correspond exactly to the excited states of hydrogen, but rather are composite functions, some of which have reduced numbers of nodes, and I said that I would provide a figure from the paper once I sorted out the permission issue. That is now sorted, and the following figure comes from my paper.
The full paper can be found at and I thank CSIRO for the permission to republish the figure. The lines show the theoretical function, the numbers in brackets are explained in the paper and the squares show the "screening constant" required to get the observed energies. The horizontal axis shows the number of radial nodes, the vertical axis, the "screening constant".
The contents of that paper are incompatible with what we use in quantum chemistry because the wave functions do not correspond to the excited states of hydrogen. The theoretical function is obtained by assuming a composite wave in which the quantal system is subdivisible provided discrete quanta of action are associated with any component. The periodic time may involve four "revolutions" to generate the quantum (which is why you see quantum numbers with the quarter quantum). What you may note is that for = 1, gold is not particularly impressive (and there was a shortage of clear data) but for = 0 and = 2 the agreement is not too bad at all, and not particularly worse than that for copper.
So, what does this mean? At the time, the relationships were simply put there as propositions, and I did not try to explain their origin. There were two reasons for this. The first was that I thought it better to simply provide the observations and not clutter it up with theory that many would find unacceptable. It is not desirable to make too many uncomfortable points in one paper. I did not even mention "composite waves" clearly. Why not? Because I felt that was against the state vector formalism, and I did not wish to have arguments on that. (That view may not be correct, because you can have "Schrödinger cat states", e.g. as described by Haroche, 2013, Angew. Chem. Int. Ed. 52: 10159 -10178). However, the second reason was perhaps more important. I was developing my own interpretation of quantum mechanics, and I was not there yet.
Anyway, I have got about as far as I think is necessary to start thinking about trying to convince others, and yes, it is an alternative. For the motion of a single particle I agree the Schrödinger equation applies (but for ensembles, while a wave equation applies, it is a variation as seen in the graph above.) I also agree the wave function is of the form
ψ = A exp(2πiS/h)
So, what is the difference? Well, everyone believes the wave function is complex, and here I beg to differ. It is, but not entirely. If you recall Euler's theory of complex numbers, you will recall that exp() = -1, i.e. it is real. That means that twice a period, for the very brief instant that S = h, ψ is real and equals the wave amplitude. No need to multiply by complex conjugates then (which by itself is an interesting concept –where did this conjugate come from? Simple squaring does not eliminate the complex nature!) I then assume the wave only affects the particle when the wave is real, when it forces the particle to behave as the wave requires. To this extent, the interpretation is a little like the pilot wave.
If you accept that, and if you accept the interpretation of what the wave function means, then the reason why an electron does not radiate energy and fall into the nucleus becomes apparent, and the Uncertainty Principle and the Exclusion Principle then follow with no further assumptions. I am currently completing a draft of this that I shall self-publish. Why self-publish? That will be the subject of a later blog.
Posted by Ian Miller on Sep 23, 2013 3:30 AM BST
In the latest Chemistry World, Derek Lowe stated that keeping up with the literature is impossible, and he argued for filtering and prioritizing. I agree with his first statement, but I do not think his second option, while it is necessary right now, is optimal. That leaves open the question, what can be done about it? I think this is important, because the major chemical societies around the world are the only organizations that could conceivably help, and surely this should be of prime importance to them. So, what are the problems?
Where to put the information is not a problem because we now seem to have almost unlimited digital storage capacity. Similarly, organizing it is not a problem provided the information is correctly input, in an appropriate format with proper tags. So far, easy! Paying for it? This is more tricky, but it should not necessarily be too costly in terms of cash.
The most obvious problem is manpower, but this can also be overcome if all chemists play their part. For example, consider chemical data. The chemist writes a paper, but it would take little extra effort to put the data into some pre-agreed format for entry into the appropriate data base. Some of this is already done with "Supplementary information", but that tends to be attached to papers, which means someone wishing to find the information has to subscribe to the journal. Is there any good reason why data like melting points and spectra cannot be provided free? As an aside, this sort of suggestion would be greatly helped if we could all agree on the formatting requirements, and what tags would be required.
This does not solve everything, because there are a lot of other problems too, such as "how to make something". One thing that has always struck me is the enormous wastage of effort in things like biofuels, where very similar work tended to be repeated every crisis. Yes, I know, intellectual property rights tend to get in the way, but surely we can get around this. As an example of this problem, I recall when I was involved in a joint venture with the old ICI empire. For one of the potential products to make, I suggested a polyamide based on a particular diamine that we could, according to me, make. ICINZ took this up, sent it off to the UK, where it was obviously viewed with something approaching indifference, but they let it out to a University for them to devise a way to make said polyamide. After a year, we got back the report, they could not make the diamine, and in any case, my suggested polymer would be useless. I suggested that they rethink that last thought, and got a rude blast back, "What did I know anyway?" So, I gave them the polymer's properties. "How did I know that?" they asked. "Simple," I replied, and showed them the data in an ICI patent, at which point I asked them whether they had simply fabricated the whole thing, or had they really made this diamine? There was one of those embarrassed silences! The institution could not even remember its own work!
In principle, how to make something is clearly placed in scientific papers, but again, the problem is, how to find the data, bearing in mind no institute can afford more than a fraction of the available journals. Even worse is the problem of finding something related. "How do you get from one functional group to another in this sort of molecule with these other groups that may interfere?" is a very common problem that in principle could be solved by computer searching, but we need an agreed format for the data, and an agreement that every chemist will do their part to place what they believe to be the best examples of their own synthetic work in it. Could we get that cooperation? Will the learned societies help?
Posted by Ian Miller on Sep 16, 2013 8:07 PM BST
One concern I have as a scientist, and one I have alluded to previously, lies in the question of computations. The problem is, we have now entered an age where computers permit modeling of a complexity unknown to previous generations. Accordingly, we can tackle problems that were never possible before, and that should be good. The problem for me is, the reports of the computations tell almost nothing about how they were done, and they are so opaque that one might even question whether the people making them fully understand the underlying code. The reason is, of course, that the code is never written by one person, but by rather a team. The code is then validated by using the computations for a sequence of known examples, and during this time, certain constants of integration that are required by the process are fixed. My problem with this follows a comment that I understand was attributed to Fermi: give me five constants and I will fit any data to an elephant. Since there is a constant associated with every integration, it is only too easy to get agreement with observation.
An example that particularly irritated me was a paper that tried "evolved" programs on molecules from which they evolved (Moran et al. 2006. J. Am Chem Soc. 128: 9342-9343). What they did was to apply a number of readily available and popular molecular orbital programs to compounds that had been the strong point of molecular orbital theory, such as benzene and other arenes. What they found was that these programs "predicted" benzene to be non-planar with quite erroneous spectral signals. That such problems occur is, I suppose, inevitable, but what I found of concern is that nowhere that I know was the reason for the deviations identified, and how such propensity to error can be corrected, and once such corrections are made, what do they do to the subsequent computations that allegedly gave outputs that agreed well with observation. If the values of various constants are changed, presumably the previous agreement would disappear.
There are several reasons why I get a little grumpy over this. One example is this question of planetary formation. Computations up to about 1995 indicated that Earth would take about 100 My to accrete from planetary embryos, however, because of the problem of Moon formation, subsequent computations have reduced this to about 30 My, and assertions are made that computations reduce the formation of gas giants to a few My. My question is, what changed? There is no question that someone can make a mistake, and subsequently correct it, but surely it should be announced what the correction was. An even worse problem, from my point of view, was what followed from my PhD project, which involved, do cyclopropane electrons delocalize into adjacent unsaturation? Computations said yes, which is hardly surprising because molecular orbital theory starts by assuming it, and subsequently tries to show why bonds should be localized. If it is going to make a mistake, it will favour delocalization. The trouble was, my results, which involved varying substituents at another ring carbon and looking at Hammett relationships, said it does not.
Subsequent computational theory said that cyclopropane conjugates with adjacent unsaturation, BUT it does not transmit it, while giving no clues as to how it came to this conclusion, apart from the desire to be in agreement with the growing list of observations. Now, if theory says that conjugation involves a common wave function over the region, then the energy at all parts of that wave must be equal. (The electrons can redistribute themselves to accommodate this, but a stationary solution to the Schrödinger equation can have only one frequency.) Now, if A has a common energy with B, and B has a common energy with C, why does A not have a common energy with C? Nobody has ever answered that satisfactorily. What further irritates me is that the statement that persists in current textbooks employed the same computational programs that "proved" the existence of polywater. That was hardly a highlight, so why are we so convinced the other results are valid? So, what would I like to see? In computations, the underpinning physics, the assumptions made, and how the constants of integration were set should be clearly stated. I am quite happy to concede that computers will not make mistakes in addition, etc, but that does not mean that the instructions for the computer cannot be questioned.
Posted by Ian Miller on Sep 9, 2013 4:31 AM BST
Once again there were very few papers that came to my attention in August relating to my ebook on planetary formation. One of the few significant ones (Geochim Cosmochim Acta 120: 1-18) involved the determination of magnesium isotopes in lunar rocks, and these turned out to be identical with those of Earth and in chondrites, which lead to the conclusion that there was no significant magnesium isotopic separation throughout the accretion disk, nor during the Moon-forming event. There is a difference in magnesium isotope ratios between magnesium found in low and high titanium content basalts, but this is attributed to the actual crystallization processes of the basalts. This result is important because much is sometimes made of variation in iron isotope variations, and in variations for some other elements. The conclusion from this work is that apart from volatile elements, isotope variation is probably more due to subsequent processing than in planetary formation, and the disk was probably homogeneous.
Another point was that a planet has been found around the star GJ 504, at a distance of 43.5 A.U. from the star. Commentators have argued that such a planet is very difficult to accommodate within the standard theory. The problem is, if planets form by collision of planetesimals, and as these get bigger, collisions between embryos, the probability of collision, at least initially, is proportional to the square of the concentration of particles, and the concentration of particles depends to some power between 1 and 2, and usually taken as to the power 1.5, of the radial distance from the star. Now standard theory argues that it in our solar system, it was only around the Jupiter-Saturn distance that bodies could form reasonably quickly, and in the NICE theory, the most favoured computational route, Uranus and Neptune formed closer and had to migrate out through gravitational exchanges between them, Jupiter, Saturn, and the non-accreted planetesimals. For GJ 504, the number density of planetesimals would be such that collision probability would be about 60 times slower, so how did they form in time to form a planet four times the size of Jupiter, given that, in standard theory in our system, growth of Jupiter and Saturn was only just fast enough to get a giant?
In my opinion, the relative size compared with Jupiter is a red herring, because it also depends on when the gas disk is cleaned out by a stellar outflow. The reason is, in my model, bodies do not grow largely by collision of equally sized objects, but rather they grow by melt accretion of ices at a given temperature, and the rate of growth depends on the initial concentration of solids in the disk only, and of course, the gas inflow rate because that, together with the initial gas temperature and the position of the star within a cluster, determines the temperature, and the temperature determines the position of the planet. If GJ 504 formed under exactly the same conditions as Earth, this planet lies about midway between where we might expect Neptune and Uranus to lie, and which one it represents can only be determined by finding inner planets. In previous computations, the planet should, not form; in my theory, it is larger than would normally be expected but it is not unexpected, and there should be further planets within that orbit. Why is only one outer planet detected so far? The detection is by direct observation of a very young planet that is still glowing over red hot through gravitational energy release. The inner ones will be just as young, but the closer to the star, the harder it is to separate their light from that of the star, and, of course, some may appear very close to the star by being on certain orbital phases.
Posted by Ian Miller on Sep 1, 2013 8:58 PM BST
Nullius in verba (take nobody's word) is the motto of the Royal Society, and it should be the motto of every scientist. The problem is, it is not. An alternative way of expressing this comes from Aristotle: the fallacy ad verecundiam. Just because someone says so, that does not mean it is right. We have to ask questions of both our logic and of nature, and I am far from convinced we do this often enough. What initiated this was an article in the August Chemistry World where it was claimed that the “unexpected” properties of elements such as mercury and gold were due to relativistic effects experienced by the valence electrons.
If we assume the valence electrons occupy orbitals corresponding to the excited states of hydrogen (i.e. simple solutions of the Schrödinger equation) the energy E is given by E = Z2Eo/n2h2. Here, Eo is the energy given by the Schrödinger equation, n gives the quanta of action associated with the state, and Z is a term that at one level is an empirical correction. Thus without this, the 6s electron in gold would have an energy 1/36 that of hydrogen, and that is just plain wrong. The usual explanation is that since the wave function goes right to the nucleus, there is a probability that the electron is near the nucleus, in which case it experiences greater electric fields. For mercury and gold, these are argued to be sufficient to lead to relativistic mass enhancement (or spacetime dilation, however you wish to present the effects), and these alter the energy sufficiently that gold has the colour it has, and both mercury and gold have properties unexpected from simple extrapolation from earlier elements in their respective columns in the periodic table. The questions are, is this correct, or are there alternative interpretations for the properties of these elements? Are we in danger of simply hanging our hat on a convenient peg without asking, is it the right one? I must confess that I dislike the relativistic interpretation, and here are my reasons.
The first involves wave-particle duality. Either the motion complies with wave properties or it does not, and the two-slit experiment is fairly good evidence that it does. Now a wave consistent with the Schrödinger equation can have only one frequency, hence only one overall energy. If a wave had two frequencies, it would self-interfere, or at the very least would not comply with the Schrödinger equation, and hence you could not claim to be using standard quantum mechanics. Relativistic effects must be consistent with the expectation energy of the particle, and should be negligible for any valence electron.
The second relates to how the relativistic effects are calculated. This involves taking small regions of space and assigning relativistic velocities to them. That means we are assigning specific momentum enhancements to specific regions of space, and surely that violates the Uncertainty Principle. The Uncertainty Principle argues the uncertainty of the position multiplied by the uncertainty of the momentum is greater or equal to the quantum of action. In fact it may be worse than that, because when we have stationary states with nh quanta, we do not know that that is not the total uncertainty. More on this in a later blog.
On a more personal note, I am annoyed because I have published an alternative explanation [ Aust. J. Phys. 40 : 329 -346 (1987)] that proposes that the wave functions of the heavier elements do not correspond exactly to the excited states of hydrogen, but rather are composite functions, some of which have reduced numbers of nodes. ( The question, “how does an electron cross a nodal surface?” disappears, because the nodes disappear.) The concept is too complicated to explain fully here, however I would suggest two reasons why it may be relevant.
The first is, if we consider the energies of the ground states of atoms in a column of elements, my theory predicts the energies quite well at each end of a row, but for elements nearer the centre, there are more discrepancies, and they alternate in sign, depending on whether n is odd or even. The series copper, silver and gold probably show the same effect, but more strongly. The “probably" is because we need a fourth member to be sure. However, the principle remains: taking two points and extrapolating to a third is invalid unless you can prove the points should lie on a known line. If there are alternating differences, then the method is invalid. Further, within this theory, gold is the element that agrees with theory the best. That does not prove the absence of relativistic effects, but at least it casts suspicion.
The second depends on calculations of the excited states. For gold, the theory predicts the outcomes rather well, especially for the d states, which involve the colour problem. Note that copper is also coloured. (I shall post a figure from the paper later. I thought I had better get agreement on copyright before I start posting it, and as yet I have had no response. The whole paper should be available as a free download, though.) The function is not exact, and for gold the p states are more the villains, and it is obvious that something is not quite right, or, as I believe, has been left out. However, the point I would make is the theoretical function depends only on quantum numbers, it has no empirical validation procedures and depends only on the nodal structure of the waves. The only interaction included is the electron nucleus electric field so some discrepancies might be anticipated. Now, obviously you should not take my word either, but when somebody else produces an alternative explanation, in my opinion we should at least acknowledge its presence rather than simply ignore it.
Posted by Ian Miller on Aug 26, 2013 3:58 AM BST
Some time ago I had posts on biofuels, and I covered a number of processes, but for certain reasons (I had been leading a research program for a company on this topic, and I thought I should lay off until I saw where that was going) I omitted what I believe is more optimal. The process I had eventually landed on is hydrothermal liquefaction, for reasons as follows.
The first problem with biomass is that it is dispersed, and it does not travel easily. How would you process forestry wastes? The shapes are ugly, and if you chip onsite, you are shipping a lot of air. If you are processing algae, either you waste a lot of energy drying it, or you ship a lot of water. There is no way around this problem initially, so you must try to make the initial travel distance as short as possible. Now, if you use a process such as Fischer Tropsch, you need such a large amount of biomass that you must harvest over a huge area, and now your transport costs rise very fast, as does the amount of fuel you burn shipping it. Accordingly, there are significant diseconomies of scale. The problem is, as you decrease the throughput, you lose processing economies of scale. What liquefaction does is reduce the volume considerably, and in turn, liquids are very much easier to transport. But to get that advantage, you have to process relatively smaller volumes. Transportation costs are always less for transport by barge, so that gives marine algae an increased desirability factor.
A second advantage of liquefaction is that you can introduce just about any feedstock, in any mix, although there are disadvantages in having too much variation. Liquefaction produces a number of useful chemicals, but they vary depending on the feedstock, and to be useful they have to be isolated and purified, and accordingly, the more different feedstocks included, the harder this problem. Ultimately, there will be the issue of “how to sell such chemicals” because the fuels market is enormously larger than that for chemicals, but initially the objective is to find ways to maximize income while the technology is made more efficient. No technology is introduced in its final form.
Processing frequently requires something else. Liquefaction has an advantage here too. If you were to hydrogenate, you have to make hydrogen, and that in turn is an unnecessary expense unless location gives you an advantage, e.g. hydrogen is being made somewhere nearby for some other purpose. In principle, liquefaction only requires water, although some catalysts are often helpful. Such catalysts can be surprisingly cheap, nevertheless they still need to be recovered, and this raises the more questionable issue relating to liquefaction: the workup. If carried out properly, the water waste volumes can be reasonably small, at least in theory, but that theory has yet to be properly tested. One advantage is that water can be recycled through the process, in which case a range of chemical impurities get recycled, where they condense further. There will be a stream of unusable phenolics, and these will have to be hydrotreated somewhere else.
The advantages are reasonably clear. There are some hydrocarbons produced that can be used as drop-in fuels following distillation. The petrol range is usually almost entirely aromatic, with high octane numbers. The diesel range from lipids has a very high cetane number. There are a number of useful chemicals made, and the technology should operate tolerably cheaply on a moderate scale, whereupon it makes liquids that can be cheaply transported elsewhere. In principle, the technology is probably the most cost-effective.
The disadvantages are also reasonably clear. The biggest is that the technology has not been demonstrated at a reasonable scale, so the advantages are somewhat theoretical. The costs may escalate with the workup, and the chemicals obtained, while potentially very useful, e.g. for polymers, are often somewhat different from the main ones currently used now, so their large-scale use requires market acceptance of materials with different properties.
Given the above, what should be done? As with some of the other options, in my opinion there is insufficient information to decide, so someone needs to build a bigger plant to see whether it lives up to expectations. Another point is that unlike oil processing, it is unlikely that any given technology will be the best in all circumstances. We may have to face a future in which there are many different options in play.
Posted by Ian Miller on Aug 19, 2013 5:03 AM BST
I devoted the last post to the question, could we provide biofuels? By that, I mean, is the land available. I cited a paper in which it showed fairly conclusively that growing corn to make fuel is not really the answer, because to get the total US fuel consumption, based on that paper you would need to multiply the total area of existing ground under cultivation in the US by a factor of 17. And you still have to eat. Of course, the US could still function reasonably well while consuming significantly less liquid fuel, but the point remains that we still need liquid fuels. The authors of this paper could also have got this wrong and have made an error in their calculations, but such errors go either way, and as areas get larger, the errors are more likely to be unfavourable than favourable because the transport costs of servicing such large areas have to be taken into account. On the other hand, the area required for obtaining fuels from microalgae is less than five per cent of current area. Again, that is probably an underestimate, although, as I argued, a large amount of microalgae could be obtained from sewage treatment plants, and they are currently in place.
One problem with growing algae, however, is you need water, and in some places, water availability is a problem (although not usually for sewage treatment). Water itself is hardly a scarce resource, as anyone who has flown over the Pacific gradually realizes. The argument that it is salty is beside the point as far as algae go because there are numerous algae that grow quite nicely in seawater. One of what I consider to be the least well-recognized biofuel projects from the 1970s energy crisis was carried out by the US navy. What they did was to grow Macrocystis on rafts in deep seawater. The basic problem with seawater far from a shore is that it is surprisingly deficient in a number of nutrients, and this was overcome by raising water from the ocean floor. Macrocystis is one of the fastest growing plants, in fact under a microscope you can watch cell division proceeding regularly. You can also mow it, so frequent replanting is not necessary. The US navy showed this was quite practical, at least in moderately deep water. (You would not want to raise nutrients from the bottom of the Kermadec trench, for example, but there is plenty of ocean that does not go to great depths.)
The experiment itself eventually failed and the rafts were lost in a storm, in part possibly because they were firmly anchored and the water-raising pipe could not stand the bending forces. That, however, is no reason to write it off. I know of no new technology that was implemented without improvements on the first efforts at the pilot/demonstration level. The fact is, problems can only be solved once they are recognized, and while storms at sea are reasonably widely appreciated, that does not mean that the first engineering effort to deal with them is going to be the full and final one. Thus the deep pipe does not have to be rigid, and it can be raised free of obstructions. Similarly, the rafts, while some form of anchoring is desirable, do not have to be rigidly anchored. So, why did the US Navy give up? The reasons are not entirely clear to me, but I rather suspect that the fact that oil prices had dropped to the lowest levels ever in real terms may have had something to do with it.
Posted by Ian Miller on Aug 12, 2013 4:55 AM BST
In previous posts I have discussed the possibility of biofuels, and the issue of greenhouse gases. One approach to the problem of greenhouse gases, or at least the excess of carbon dioxide, is to make biofuels. The carbon in the fuels comes from the atmosphere, so at least we slow down the production of greenhouse gases, and additionally we address, at least partially, the problem of transport fuels. Sooner or later we shall run out of oil, so even putting aside the greenhouse problem, we need a substitute. The problem then is, how to do it?
The first objections we see come from what I believe is faulty analysis and faulty logic. Who has not seen the argument: "Biofuels are useless? All you have to do is to see the energy balances and land requirements for corn." This argument is of the "straw man" type; you choose a really bad example and generalize. An alternative was published recently in Biomass and Bioenergy. 56: 600-606. These authors provided an analysis of the land area required to provide 50% of the US transport fuels. Corn came in at a massive 846% of current US cropping area, i.e. to get the fuels, the total US cropping area needed to be multiplied by a factor greater than 8. Some might regard that as impractical! However, microalgae came in at between 1.1 and 2.5% of US cropping area. That is still a lot of area, but it does seem to be more manageable.
There is also the question of how to grow things, fuel needed, fertilizer needed, pesticides needed, etc. Corn here comes out very poorly, in fact some have argued that you put more energy in the form of useful work in growing it than you get out. (The second law bites again!) Now, I must show my bias and confess to having participated in a project to obtain chemicals and fuels from microalgae grown in sewage treatment water. It grows remarkably easily: no fertilizer requirements, no need to plant it or look after it; it really does grow itself, although there may be a case for seeding the growing stream to get a higher yield of desirable algae. Further, the algae removes much of the nitrogen and phosphate that would otherwise be an environmental nuisance, although that is not exactly a free run because when finished processing, the phosphates in particular remain. However, good engineering can presumably end up with a process stream that can be used for fertilizer.
One issue is that microalgae in a nutrient rich environment, and particularly in a nitrogen rich environment, tend to reproduce as rapidly as possible. If starved of nitrogen, they tend to use the photochemical energy and store its reserves of lipids. It is possible, at least with some species, to reach 75% lipid content, while rapidly growing microalgae may have only 5% extractible lipids.
That leaves the choice of process. My choice, biased that I am, uses hydrothermal liquefaction. Why? Well, first, harvesting microalgae is not that easy, and a lot of energy can be wasted drying it. With hydrothermal liquefaction, you need an excess of water, so "all you have to do" is to concentrate the algae to a paste. The quotation marks are to indicate that even that is easier said than done. As an aside, simple extraction of the wet algae with an organic solvent is not a good idea: you can get some really horrible emulsions. Another advantage of hydrothermal liquefaction is, if done properly, not only do you get fuel from the lipids, but also from the phospholipids, and some other fatty acid species that are otherwise difficult to extract. Finally, you end up with a string of interesting chemicals, and in principle, the chemicals, which are rich in nitrogen heterocycles, would in the long run be worth far more than the fuel content.
The fuel is interesting as well. If done under appropriate conditions, the lipid acids mainly either decarboxylate or decarbonylate, to form linear alkanes or alkenes one carbon atom short. There is a small amount of the obvious diketone formed as well. The polyunsaturated acids fragment, and coupled with some deaminated aminoacid fragments, make toluene, xylenes, and interestingly enough, ethyl benzene and styrene. Green polystyrene is plausible.
As you may gather, I am reasonably enthusiastic about this concept, because it simultaneously addresses a number of problems: greenhouse gases, "green" chemicals, liquid fuels, and sewage treatment, with perhaps phosphate recovery thrown in. There are a number of other variations on this theme; the point of what I am trying to say is there are things we can do. I believe the answer to the question is yes. Certainly there are more things to do, but no technology is invented mature.
Posted by Ian Miller on Aug 5, 2013 5:17 AM BST
1 2 3 4 5 |
d76f4fd6222542f5 | All Physics Faculty Publications
Document Type
Journal/Book Title/Conference
Physical Review D
American Physical Society
Publication Date
First Page
Last Page
The Wheeler-DeWitt equation of vacuum geometrodynamics is turned into a Schrödinger equation by imposing the normal Gaussian coordinate conditions with Lagrange multipliers and then restoring the coordinate invariance of the action by parametrization. This procedure corresponds to coupling the gravitational field to a reference fluid. The source appearing in the Einstein law of gravitation has the structure of a heat-conducting dust. When one imposes only the Gaussian time condition but not the Gaussian frame conditions, the heat flow vanishes and the dust becomes incoherent. The canonical description of the fluid uses the Gaussian coordinates and their conjugate momenta as the fluid variables. The energy density and the momentum density of the fluid turn out to be homogeneous linear functions of such momenta. This feature guarantees that the Dirac constraint quantization of the gravitational field coupled to the Gaussian reference fluid leads to a functional Schrödinger equation in Gaussian time. Such an equation possesses the standard positive-definite conserved norm.
For a heat-conducting fluid, the states depend on the metric induced on a given hypersurface; for an incoherent dust, they depend only on geometry. It seems natural to interpret the integrand of the norm integral as the probability density for the metric (or the geometry) to have a definite value on a hypersurface specified by the Gaussian clock. Such an interpretation fails because the reference fluid is realistic only if its energy-momentum tensor satisfies the familiar energy conditions. In the canonical theory, the energy conditions become additional constraints on the induced metric and its conjugate momentum. For a heat-conducting dust, the total system of constraints is not first class and cannot be implemented in quantum theory. As a result, the Gaussian coordinates are not determined by physical properties of a realistic material system and the probability density for the metric loses thereby its operational significance. For an incoherent dust, the energy conditions and the dynamical constraints are first class and can be carried over into quantum theory. However, because the geometry operator considered as a multiplication operator does not commute with the energy conditions, the integrand of the norm integral still does not yield the probability density. The interpretation of the Schrödinger geometrodynamics remains viable, but it requires a rather complicated procedure for identifying the fundamental observables. All our considerations admit generalization to other coordinate conditions and other covariant field theories.
Originally published by the American Physical Society. Publisher's PDF can be accessed through Physical Review D - Particles, Fields, Gravitation, and Cosmology. Note: Charles Torre was at the Florida Institute of Technology at the time of publication.
Included in
Physics Commons |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.